The series of experiments conducted by Rutherford and his co-workers to understand the arrangement of protons and electrons in an atom gave rise to the Rutherford atomic model. This series of experiments is also called the alpha-particle scattering experiment.
In this experiment, a radioactive metal is placed in a lead cavity, as shown in the figure. The cavity is made with a slit so that only a narrow beam of alpha particles passes out through it. This beam of alpha particles is made to pass through a thin gold foil. To study the alpha particles after scattering, a movable circular screen is placed around the gold foil. This movable screen is coated with zinc sulphide so that alpha particles striking it produce flashes of light, or scintillations. By analyzing the flashes on different parts of the zinc sulphide screen, the proportion of alpha particles scattered through different angles can be determined.
The following observations were made in this experiment.
- Most of the alpha particles passed through the gold foil without any deflection.
- Some of the alpha particles passed through the gold foil and were deflected through small angles.
- Very few of the alpha particles did not pass through the foil at all; they were deflected through large angles or even rebounded back through an angle of nearly 180°.
These observations can be explained as follows.
- Since most of the alpha particles passed through without deflection, most of the space within the atom is empty.
- Since some of the alpha particles were deflected through small angles, there must be a small positively charged mass at the centre of the atom. Moreover, the size of this positively charged mass is extremely small compared with the size of the atom.
- The large deflections and the rebounding of alpha particles can be explained as the result of nearly direct collisions of alpha particles with this positively charged mass.
Rutherford gave the following model of the atom:
- In an atom there is a heavy, positively charged core, called the nucleus, in which nearly the whole mass of the atom is concentrated. The size of the nucleus is extremely small in comparison to the size of the atom: the radius of the nucleus is of the order of 10−15 m, whereas the radius of the atom is of the order of 10−10 m.
- The negatively charged electrons revolve around the nucleus in circular paths.
- The total negative charge of the revolving electrons is exactly equal to the total positive charge present in the nucleus, so the atom as a whole is electrically neutral.
- The centrifugal force on a revolving electron is balanced by the electrostatic attraction between the nucleus and the electron; equivalently, the electrostatic attraction provides the centripetal force for the electron's circular motion (a rough numerical illustration of this balance is sketched below).
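As a rough numerical illustration of this last point (a sketch under assumed values, not part of Rutherford's original work), equating the Coulomb attraction to the required centripetal force, mv²/r = ke²/r², gives the orbital speed of an electron for an assumed orbital radius of about 10−10 m:

```python
# Hedged sketch: estimate the orbital speed of an electron if the Coulomb
# attraction supplies the centripetal force, m*v**2/r = k*e**2/r**2.
# The orbital radius r is an assumed round number (~atomic size), not a measurement.
import math

k = 8.988e9       # Coulomb constant, N*m^2/C^2
e = 1.602e-19     # elementary charge, C
m_e = 9.109e-31   # electron mass, kg
r = 1.0e-10       # assumed orbital radius, m (order of the atomic radius)

v = math.sqrt(k * e**2 / (m_e * r))    # from m*v^2/r = k*e^2/r^2
print(f"orbital speed ~ {v:.2e} m/s")  # ~1.6e6 m/s
```

The resulting speed, roughly 1.6 × 10⁶ m/s, is much less than the speed of light, consistent with treating the orbit non-relativistically.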
Drawbacks of Rutherford’s atomic model:
- According to the classical theory of electrodynamics, an accelerated charged particle continuously emits electromagnetic radiation. So the electron revolving around the nucleus must emit electromagnetic radiation, and it should gradually spiral towards the nucleus and finally collapse into it. But atoms do not collapse in this way.
- It could not explain the line spectra produced by hydrogen, nor how these spectra change on application of a strong electric or magnetic field.
Newton's second law: Fnet = ma
To find the net force on an object, all the vector forces that act on the object have to be added. We use free-body diagrams to help us with this task. Free-body diagrams are diagrams used to show the relative magnitude and direction of all forces acting on an object in a given situation. The object is represented by a box or some other simple shape. The forces are represented by arrows. The length of the arrow in a free-body diagram is proportional to the magnitude of the force. The direction of the arrow gives the direction of the force. Each force arrow in the diagram is labeled. All forces which act on the object must be represented in the free-body diagram. A free-body diagram only includes the forces that act on the object, not the forces the object itself exerts on other objects.
A free-body diagram for a freely falling ball:
Neglecting air friction, the only force acting on the ball is gravity.
A free-body diagram for a ball resting on the ground:
Gravity is acting downward. The ball is at rest. The ground must exert a force equal in magnitude and opposite in direction on the ball. This force is called the normal force, n, since it is normal to the surface.
A free-body diagram for a mass on an inclined plane:
Gravity acts downward. The component of Fg perpendicular to the surface is cancelled out by the normal force the surface exerts on the mass. The mass does not accelerate in the direction perpendicular to the surface. The component of Fg parallel to the surface causes the mass to accelerate in that direction.
Suppose that you want to move a heavy file cabinet, which is standing in the middle of your office, into a corner. You push on it, but nothing happens. What is going on?
You exert a force, but there is no acceleration. The net force must be zero.
Which force of equal magnitude points in a direction opposite to the direction of the force you are applying?
The force of static friction (fs) cancels the applied force when the cabinet is at rest while you are pushing on it.
You push harder. Eventually the cabinet breaks away and starts accelerating. But you have to keep on pushing just to keep it moving with a constant velocity. When you stop pushing, it quickly slows down and comes to rest. Why?
While the cabinet is moving the force of kinetic friction (fk) opposes the applied force. When it is moving with constant velocity, the two forces exactly cancel.
Where do these frictional forces come from? Frictional forces are intermolecular forces. These forces act between the molecules of two different surfaces that are in close contact with each other. On a microscopic scale, most surfaces are rough. Even surfaces that look perfectly smooth to the naked eye show many projections and dents under a microscope. The intermolecular forces are strongest where these projections and dents interlock resulting in close contact. The component of the intermolecular force normal to the surfaces provides the normal force which prevents objects from passing through each other and the component parallel to the surface is responsible for the frictional force.
Assume a cabinet is resting on the floor. Nobody is pushing on it. The net intermolecular force between the molecules of two different surfaces is normal to the surface. The force of gravity acting on the cabinet (red arrow) is balanced by the normal force from the floor acting on it (black arrow).
Now assume that you are pushing against the cabinet. The cabinet is not moving, but the surface molecules are displaced by microscopic amounts. This results in a net intermolecular force, which has a component tangential to the surface (the force of static friction). This tangential component opposes the applied force. The net force on the cabinet is zero. The harder you push the greater is the microscopic displacement of the surface molecules and the greater is the tangential component of the net intermolecular force.
When you push hard enough, some of the projections on the surfaces will break off, i.e. some of the surface molecules will be completely displaced. The horizontal component of the net intermolecular force diminishes and no longer completely opposes the applied force. The cabinet accelerates. But while the horizontal component has diminished, it has not vanished. It is now called the force of kinetic friction. For the cabinet to keep accelerating, you have to push with a force greater in magnitude than the force of kinetic friction. To keep it going with constant velocity you have to push with a force equal in magnitude to the force of kinetic friction. If you stop pushing, the force of kinetic friction will produce an acceleration in the opposite direction of the velocity, and the cabinet will slow down and stop.
The frictional force always acts between two surfaces, and opposes the relative motion of the two surfaces.
The maximum force of static friction between two surfaces is roughly proportional to the magnitude of the force pressing the two surfaces together. The proportionality constant is called the coefficient of static friction μs. The magnitude of the force of static friction is always smaller than or equal to μsN. We write fs ≤ μsN, where fs is the magnitude of the frictional force and N is the magnitude of the force pressing the surfaces together. For the cabinet and the floor, N is the weight of the cabinet. The coefficient of static friction is a number (no units). The rougher the surface, the greater is the coefficient of static friction.
As long as the applied force has a magnitude smaller than μsN, the force of static friction fs has the same magnitude as the applied force, but points in the opposite direction.
The magnitude of the force of kinetic friction acting on an object is fk = μkN, where μk is the coefficient of kinetic friction. For most surfaces, μk is less than μs.
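The static and kinetic friction rules above can be summarized in a few lines of code. The sketch below (with assumed, illustrative mass and coefficients, not values from this text) decides whether a horizontal push moves a block resting on a level floor and, if so, computes its acceleration from Fnet = ma:

```python
# Hedged sketch of the static/kinetic friction model on a level floor.
# fs <= mu_s * N while at rest; fk = mu_k * N once sliding. Values are illustrative.
g = 9.8  # m/s^2

def net_acceleration(mass, mu_s, mu_k, applied_force):
    """Acceleration of a block pushed horizontally on a level floor."""
    N = mass * g                      # normal force equals the weight on a level floor
    if applied_force <= mu_s * N:     # static friction cancels the push
        return 0.0
    f_k = mu_k * N                    # kinetic friction once the block slides
    return (applied_force - f_k) / mass

# Example: a 50 kg cabinet with assumed mu_s = 0.5, mu_k = 0.3
for F in (200.0, 245.0, 300.0):
    print(F, "N ->", round(net_acceleration(50.0, 0.5, 0.3, F), 2), "m/s^2")
```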
A woman at an airport is towing her 20 kg suitcase at constant speed by pulling on a strap at an angle of θ above the horizontal. She pulls on the strap with a 35 N force, and the frictional force on the suitcase is 20 N. (A numerical sketch of the solution follows the question parts below.)
(a) Draw a free body diagram of the suitcase.
(b) What angle does the strap make with the horizontal?
(c) What normal force does the ground exert on the suitcase?
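A minimal numerical sketch of parts (b) and (c), assuming g = 9.8 m/s² and taking "constant speed" to mean the net force on the suitcase is zero:

```python
# Hedged sketch: suitcase towed at constant speed, so the net force is zero.
# Horizontal: F*cos(theta) = f (friction);  Vertical: n + F*sin(theta) = m*g.
import math

m, g = 20.0, 9.8      # kg, m/s^2 (g assumed)
F, f = 35.0, 20.0     # pulling force and friction force, N

theta = math.acos(f / F)              # from F*cos(theta) = f
n = m * g - F * math.sin(theta)       # from n + F*sin(theta) = m*g

print(f"(b) strap angle  ~ {math.degrees(theta):.0f} degrees")   # ~55 degrees
print(f"(c) normal force ~ {n:.0f} N")                           # ~167 N
```

The strap angle comes out near 55° and the normal force near 167 N; because the strap pulls partly upward, the normal force is less than the suitcase's weight of about 196 N.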
Determine the stopping distance for a skier with a speed of 20 m/s on a slope that makes an angle θ with the horizontal. Assume μk = 0.18 and θ = 5°.
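A worked sketch of the skier problem, assuming the skier is sliding down the slope and g = 9.8 m/s²; since μk cos θ > sin θ for these numbers, friction dominates gravity along the slope and the skier decelerates:

```python
# Hedged sketch: skier sliding down the incline with kinetic friction.
# Along the slope: a = g*(mu_k*cos(theta) - sin(theta))  (a deceleration here).
# Stopping distance from v^2 = v0^2 - 2*a*d  ->  d = v0^2 / (2*a).
import math

v0, mu_k, theta_deg, g = 20.0, 0.18, 5.0, 9.8
theta = math.radians(theta_deg)

a = g * (mu_k * math.cos(theta) - math.sin(theta))   # net deceleration along the slope
d = v0**2 / (2 * a)

print(f"deceleration ~ {a:.2f} m/s^2, stopping distance ~ {d:.0f} m")
```

With these values the deceleration is about 0.9 m/s² and the stopping distance comes out near 220 m.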
Suppose you are driving a car along a highway at a high speed. Why should you avoid slamming on your brakes if you want to stop in the shortest distance? That is, why should you keep the wheels turning as you brake?
A heavy box sits in the back of a pickup truck. The truck and the box are accelerating towards the left. What is the direction of the frictional force on the box?
Cosmic microwave background
The cosmic microwave background (CMB, CMBR), in Big Bang cosmology, is electromagnetic radiation which is a remnant from an early stage of the universe, also known as "relic radiation". The CMB is faint cosmic background radiation filling all space. It is an important source of data on the early universe because it is the oldest electromagnetic radiation in the universe, dating to the epoch of recombination. With a traditional optical telescope, the space between stars and galaxies (the background) is completely dark. However, a sufficiently sensitive radio telescope shows a faint background noise, or glow, almost isotropic, that is not associated with any star, galaxy, or other object. This glow is strongest in the microwave region of the radio spectrum. The accidental discovery of the CMB in 1965 by American radio astronomers Arno Penzias and Robert Wilson was the culmination of work initiated in the 1940s, and earned the discoverers the 1978 Nobel Prize in Physics.
The CMB is landmark evidence of the Big Bang origin of the universe. When the universe was young, before the formation of stars and planets, it was denser, much hotter, and filled with an opaque fog of hydrogen plasma. As the universe expanded the plasma grew cooler and the radiation filling it expanded to longer wavelengths. When the temperature had dropped enough, protons and electrons combined to form neutral hydrogen atoms. Unlike the plasma, these newly formed atoms could not scatter the thermal radiation by Thomson scattering, and so the universe became transparent. Cosmologists refer to the time period when neutral atoms first formed as the recombination epoch, and the event shortly afterwards when photons started to travel freely through space is referred to as photon decoupling. The photons that existed at the time of photon decoupling have been propagating ever since, though growing less energetic, since the expansion of space causes their wavelength to increase over time (and wavelength is inversely proportional to energy according to Planck's relation). This is the source of the alternative term relic radiation. The surface of last scattering refers to the set of points in space at the right distance from us so that we are now receiving photons originally emitted from those points at the time of photon decoupling.
Importance of precise measurement
Precise measurements of the CMB are critical to cosmology, since any proposed model of the universe must explain this radiation. The CMB has a thermal black body spectrum at a temperature of about 2.725 K. The spectral radiance dEν/dν peaks at 160.23 GHz, in the microwave range of frequencies, corresponding to a photon energy of about 6.626 ⋅ 10−4 eV. Alternatively, if spectral radiance is defined as dEλ/dλ, then the peak wavelength is 1.063 mm (282 GHz, 1.168 ⋅ 10−3 eV photons). The glow is very nearly uniform in all directions, but the tiny residual variations show a very specific pattern, the same as that expected of a fairly uniformly distributed hot gas that has expanded to the current size of the universe. In particular, the spectral radiance at different angles of observation in the sky contains small anisotropies, or irregularities, which vary with the size of the region examined. They have been measured in detail, and match what would be expected if small thermal variations, generated by quantum fluctuations of matter in a very tiny space, had expanded to the size of the observable universe we see today. This is a very active field of study, with scientists seeking both better data (for example, the Planck spacecraft) and better interpretations of the initial conditions of expansion. Although many different processes might produce the general form of a black body spectrum, no model other than the Big Bang has yet explained the fluctuations. As a result, most cosmologists consider the Big Bang model of the universe to be the best explanation for the CMB.
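As a cross-check of the numbers quoted above (a sketch only), Wien's displacement law in its frequency form, νpeak ≈ 2.821 kT/h, and in its wavelength form, λpeak ≈ b/T, reproduce the stated peak frequency, photon energy, and peak wavelength for T ≈ 2.725 K:

```python
# Hedged sketch: reproduce the quoted CMB spectral peaks from T ~ 2.725 K.
# Frequency-form Wien law: nu_peak ~ 2.821 * k * T / h; wavelength-form: lam_peak ~ b / T.
k_B = 1.380649e-23    # J/K
h   = 6.62607015e-34  # J*s
b   = 2.897771955e-3  # m*K (Wien displacement constant)
T   = 2.725           # K

nu_peak  = 2.821 * k_B * T / h            # ~1.60e11 Hz = 160 GHz
E_peak   = h * nu_peak / 1.602176634e-19  # photon energy in eV, ~6.6e-4 eV
lam_peak = b / T                          # ~1.06e-3 m = 1.06 mm

print(f"nu_peak  ~ {nu_peak/1e9:.1f} GHz")
print(f"E_peak   ~ {E_peak:.2e} eV")
print(f"lam_peak ~ {lam_peak*1e3:.3f} mm")
```

The frequency-space peak (160 GHz) and the wavelength-space peak (1.06 mm, i.e. 282 GHz) differ, as the text notes, because dEν/dν and dEλ/dλ are different functions of the same spectrum.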
The high degree of uniformity throughout the observable universe and its faint but measured anisotropy lend strong support for the Big Bang model in general and the ΛCDM ("Lambda Cold Dark Matter") model in particular. Moreover, the fluctuations are coherent on angular scales that are larger than the apparent cosmological horizon at recombination. Either such coherence is acausally fine-tuned, or cosmic inflation occurred.
Other than the temperature and polarization anisotropy, the CMB frequency spectrum is expected to feature tiny departures from the black-body law known as spectral distortions. These are also at the focus of an active research effort with the hope of a first measurement within the forthcoming decades, as they contain a wealth of information about the primordial universe and the formation of structures at late time.
The cosmic microwave background radiation is an emission of uniform, black body thermal energy coming from all parts of the sky. The radiation is isotropic to roughly one part in 100,000: the root mean square variations are only 18 μK, after subtracting out a dipole anisotropy from the Doppler shift of the background radiation. The latter is caused by the peculiar velocity of the Sun relative to the comoving cosmic rest frame as it moves at some 369.82 ± 0.11 km/s towards the constellation Leo (galactic longitude 264.021° ± 0.011°, galactic latitude 48.253° ± 0.005°). The CMB dipole and aberration at higher multipoles have been measured, consistent with galactic motion.
In the Big Bang model for the formation of the universe, inflationary cosmology predicts that after about 10−37 seconds the nascent universe underwent exponential growth that smoothed out nearly all irregularities. The remaining irregularities were caused by quantum fluctuations in the inflaton field that caused the inflation event. Long before the formation of stars and planets, the early universe was smaller, much hotter and, starting 10−6 seconds after the Big Bang, filled with a uniform glow from its white-hot fog of interacting plasma of photons, electrons, and baryons.
As the universe expanded, adiabatic cooling caused the energy density of the plasma to decrease until it became favorable for electrons to combine with protons, forming hydrogen atoms. This recombination event happened when the temperature was around 3000 K or when the universe was approximately 379,000 years old. As photons did not interact with these electrically neutral atoms, the former began to travel freely through space, resulting in the decoupling of matter and radiation.
The color temperature of the ensemble of decoupled photons has continued to diminish ever since; now down to about 2.7 K, it will continue to drop as the universe expands. The intensity of the radiation corresponds to black-body radiation at 2.726 K because red-shifted black-body radiation is just like black-body radiation at a lower temperature. According to the Big Bang model, the radiation from the sky we measure today comes from a spherical surface called the surface of last scattering. This represents the set of locations in space at which the decoupling event is estimated to have occurred and at a point in time such that the photons from that distance have just reached observers. Most of the radiation energy in the universe is in the cosmic microwave background, making up a fraction of roughly 5 × 10−5 of the total density of the universe.
Two of the greatest successes of the Big Bang theory are its prediction of the almost perfect black body spectrum and its detailed prediction of the anisotropies in the cosmic microwave background. The CMB spectrum has become the most precisely measured black body spectrum in nature.
The energy density of the CMB is 0.26 eV/cm3, which yields about 411 photons/cm3.
See also: Discovery of cosmic microwave background radiation. The cosmic microwave background was first predicted in 1948 by Ralph Alpher and Robert Herman, in close relation to work performed by Alpher's PhD advisor George Gamow. Alpher and Herman were able to estimate the temperature of the cosmic microwave background to be 5 K, though two years later they re-estimated it at 28 K. This high estimate was due to a misestimate of the Hubble constant by Alfred Behr, which could not be replicated and was later abandoned in favor of the earlier estimate. Although there were several previous estimates of the temperature of space, these suffered from two flaws. First, they were measurements of the temperature of space and did not suggest that space was filled with a thermal Planck spectrum. Second, they depended on our being at a special spot at the edge of the Milky Way galaxy, and they did not suggest the radiation is isotropic. The estimates would yield very different predictions if Earth happened to be located elsewhere in the universe.
The 1948 results of Alpher and Herman were discussed in many physics settings through about 1955, when both left the Applied Physics Laboratory at Johns Hopkins University. The mainstream astronomical community, however, was not intrigued at the time by cosmology. Alpher and Herman's prediction was rediscovered by Yakov Zel'dovich in the early 1960s, and independently predicted by Robert Dicke at the same time. The first published recognition of the CMB radiation as a detectable phenomenon appeared in a brief paper by Soviet astrophysicists A. G. Doroshkevich and Igor Novikov, in the spring of 1964. In 1964, David Todd Wilkinson and Peter Roll, Dicke's colleagues at Princeton University, began constructing a Dicke radiometer to measure the cosmic microwave background. In 1964, Arno Penzias and Robert Woodrow Wilson at the Crawford Hill location of Bell Telephone Laboratories in nearby Holmdel Township, New Jersey had built a Dicke radiometer that they intended to use for radio astronomy and satellite communication experiments. On 20 May 1964 they made their first measurement clearly showing the presence of the microwave background, with their instrument having an excess 4.2K antenna temperature which they could not account for. After receiving a telephone call from Crawford Hill, Dicke said "Boys, we've been scooped." A meeting between the Princeton and Crawford Hill groups determined that the antenna temperature was indeed due to the microwave background. Penzias and Wilson received the 1978 Nobel Prize in Physics for their discovery.
The interpretation of the cosmic microwave background was a controversial issue in the 1960s with some proponents of the steady state theory arguing that the microwave background was the result of scattered starlight from distant galaxies. Using this model, and based on the study of narrow absorption line features in the spectra of stars, the astronomer Andrew McKellar wrote in 1941: "It can be calculated that the 'rotational temperature' of interstellar space is 2 K." However, during the 1970s the consensus was established that the cosmic microwave background is a remnant of the big bang. This was largely because new measurements at a range of frequencies showed that the spectrum was a thermal, black body spectrum, a result that the steady state model was unable to reproduce.
Harrison, Peebles, Yu and Zel'dovich realized that the early universe would require inhomogeneities at the level of 10−4 or 10−5. Rashid Sunyaev later calculated the observable imprint that these inhomogeneities would have on the cosmic microwave background. Increasingly stringent limits on the anisotropy of the cosmic microwave background were set by ground-based experiments during the 1980s. RELIKT-1, a Soviet cosmic microwave background anisotropy experiment on board the Prognoz 9 satellite (launched 1 July 1983) gave upper limits on the large-scale anisotropy. The NASA COBE mission clearly confirmed the primary anisotropy with the Differential Microwave Radiometer instrument, publishing their findings in 1992. The team received the Nobel Prize in physics for 2006 for this discovery.
Inspired by the COBE results, a series of ground and balloon-based experiments measured cosmic microwave background anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the scale of the first acoustic peak, which COBE did not have sufficient resolution to resolve. This peak corresponds to large scale density variations in the early universe that are created by gravitational instabilities, resulting in acoustical oscillations in the plasma. The first peak in the anisotropy was tentatively detected by the Toco experiment and the result was confirmed by the BOOMERanG and MAXIMA experiments. These measurements demonstrated that the geometry of the universe is approximately flat, rather than curved. They ruled out cosmic strings as a major component of cosmic structure formation and suggested cosmic inflation was the right theory of structure formation.
The second peak was tentatively detected by several experiments before being definitively detected by WMAP, which has tentatively detected the third peak. As of 2010, several experiments to improve measurements of the polarization and the microwave background on small angular scales are ongoing. These include DASI, WMAP, BOOMERanG, QUaD, Planck spacecraft, Atacama Cosmology Telescope, South Pole Telescope and the QUIET telescope.
Relationship to the Big Bang
The cosmic microwave background radiation and the cosmological redshift-distance relation are together regarded as the best available evidence for the Big Bang theory. Measurements of the CMB have made the inflationary Big Bang theory the Standard Cosmological Model. The discovery of the CMB in the mid-1960s curtailed interest in alternatives such as the steady state theory.
In the late 1940s Alpher and Herman reasoned that if there was a big bang, the expansion of the universe would have stretched the high-energy radiation of the very early universe into the microwave region of the electromagnetic spectrum, and down to a temperature of about 5 K. They were slightly off with their estimate, but they had the right idea. They predicted the CMB. It took another 15 years for Penzias and Wilson to stumble into discovering that the microwave background was actually there.
The CMB gives a snapshot of the universe when, according to standard cosmology, the temperature dropped enough to allow electrons and protons to form hydrogen atoms, thereby making the universe nearly transparent to radiation because light was no longer being scattered off free electrons. When it originated some 380,000 years after the Big Bang—this time is generally known as the "time of last scattering" or the period of recombination or decoupling—the temperature of the universe was about 3000 K. This corresponds to an energy of about 0.26 eV, which is much less than the 13.6 eV ionization energy of hydrogen.
Since decoupling, the color temperature of the background radiation has dropped by an average factor of 1090 due to the expansion of the universe. As the universe expands, the CMB photons are redshifted, causing them to decrease in energy. The color temperature of this radiation stays inversely proportional to a parameter that describes the relative expansion of the universe over time, known as the scale factor. The color temperature Tr of the CMB as a function of redshift, z, can be shown to be proportional to the color temperature of the CMB as observed in the present day (2.725 K or 0.2348 meV):
Tr = 2.725 ⋅ (1 + z)
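As a quick consistency check (using only numbers already quoted in this article), evaluating this relation at the redshift of last scattering, z ≈ 1090, recovers the roughly 3000 K decoupling temperature mentioned above:

```python
# Hedged sketch: CMB color temperature as a function of redshift, T(z) = T0 * (1 + z).
T0 = 2.725                 # K, present-day CMB temperature
for z in (0, 10, 1090):    # 1090 ~ redshift of last scattering
    print(f"z = {z:5d}: T ~ {T0 * (1 + z):7.1f} K")
# z = 1090 gives ~2973 K, consistent with the ~3000 K decoupling temperature quoted earlier.
```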
For details about the reasoning that the radiation is evidence for the Big Bang, see Cosmic background radiation of the Big Bang.
The anisotropy, or directional dependency, of the cosmic microwave background is divided into two types: primary anisotropy, due to effects that occur at the surface of last scattering and before; and secondary anisotropy, due to effects such as interactions of the background radiation with intervening hot gas or gravitational potentials, which occur between the last scattering surface and the observer.
The structure of the cosmic microwave background anisotropies is principally determined by two effects: acoustic oscillations and diffusion damping (also called collisionless damping or Silk damping). The acoustic oscillations arise because of a conflict in the photon–baryon plasma in the early universe. The pressure of the photons tends to erase anisotropies, whereas the gravitational attraction of the baryons, moving at speeds much slower than light, makes them tend to collapse to form overdensities. These two effects compete to create acoustic oscillations, which give the microwave background its characteristic peak structure. The peaks correspond, roughly, to resonances in which the photons decouple when a particular mode is at its peak amplitude.
The peaks contain interesting physical signatures. The angular scale of the first peak determines the curvature of the universe (but not the topology of the universe). The next peak—ratio of the odd peaks to the even peaks—determines the reduced baryon density. The third peak can be used to get information about the dark-matter density.
The locations of the peaks give important information about the nature of the primordial density perturbations. There are two fundamental types of density perturbations called adiabatic and isocurvature. A general density perturbation is a mixture of both, and different theories that purport to explain the primordial density perturbation spectrum predict different mixtures.
- Isocurvature density perturbations: In an isocurvature density perturbation, the sum (over different types of particle) of the fractional additional densities is zero. That is, a perturbation where at some spot there is 1% more energy in baryons than average, 1% more energy in photons than average, and 2% less energy in neutrinos than average, would be a pure isocurvature perturbation. Hypothetical cosmic strings would produce mostly isocurvature primordial perturbations.
- Adiabatic density perturbations: In an adiabatic density perturbation, the fractional additional number density of each type of particle (baryons, photons ...) is the same. That is, if at one place there is a 1% higher number density of baryons than average, then at that place there is a 1% higher number density of photons (and a 1% higher number density in neutrinos) than average. Cosmic inflation predicts that the primordial perturbations are adiabatic.
The CMB spectrum can distinguish between these two because these two types of perturbations produce different peak locations. Isocurvature density perturbations produce a series of peaks whose angular scales (ℓ values of the peaks) are roughly in the ratio 1 : 3 : 5 : ..., while adiabatic density perturbations produce peaks whose locations are in the ratio 1 : 2 : 3 : ... Observations are consistent with the primordial density perturbations being entirely adiabatic, providing key support for inflation, and ruling out many models of structure formation involving, for example, cosmic strings.
Collisionless damping is caused by two effects, when the treatment of the primordial plasma as fluid begins to break down:
- the increasing mean free path of the photons as the primordial plasma becomes increasingly rarefied in an expanding universe,
- the finite depth of the last scattering surface (LSS), which causes the mean free path to increase rapidly during decoupling, even while some Compton scattering is still occurring.
These effects contribute about equally to the suppression of anisotropies at small scales and give rise to the characteristic exponential damping tail seen in the very small angular scale anisotropies.
The depth of the LSS refers to the fact that the decoupling of the photons and baryons does not happen instantaneously, but instead requires an appreciable fraction of the age of the universe up to that era. One method of quantifying how long this process took uses the photon visibility function (PVF). This function is defined so that, denoting the PVF by P(t), the probability that a CMB photon last scattered between time t and t + dt is given by P(t) dt.
The maximum of the PVF (the time when it is most likely that a given CMB photon last scattered) is known quite precisely. The first-year WMAP results put the time at which P(t) has a maximum at 372,000 years. This is often taken as the "time" at which the CMB formed. However, to figure out how long it took the photons and baryons to decouple, we need a measure of the width of the PVF. The WMAP team finds that the PVF is greater than half of its maximal value (the "full width at half maximum", or FWHM) over an interval of 115,000 years. By this measure, decoupling took place over roughly 115,000 years, and when it was complete, the universe was roughly 487,000 years old.
Late time anisotropy
Since the CMB came into existence, it has apparently been modified by several subsequent physical processes, which are collectively referred to as late-time anisotropy, or secondary anisotropy. When the CMB photons became free to travel unimpeded, ordinary matter in the universe was mostly in the form of neutral hydrogen and helium atoms. However, observations of galaxies today seem to indicate that most of the volume of the intergalactic medium (IGM) consists of ionized material (since there are few absorption lines due to hydrogen atoms). This implies a period of reionization during which some of the material of the universe was broken into hydrogen ions.
The CMB photons are scattered by free charges such as electrons that are not bound in atoms. In an ionized universe, such charged particles have been liberated from neutral atoms by ionizing (ultraviolet) radiation. Today these free charges are at sufficiently low density in most of the volume of the universe that they do not measurably affect the CMB. However, if the IGM was ionized at very early times when the universe was still denser, then there are two main effects on the CMB:
- Small scale anisotropies are erased. (Just as when looking at an object through fog, details of the object appear fuzzy.)
- The physics of how photons are scattered by free electrons (Thomson scattering) induces polarization anisotropies on large angular scales. This broad angle polarization is correlated with the broad angle temperature perturbation.
Both of these effects have been observed by the WMAP spacecraft, providing evidence that the universe was ionized at very early times, at a redshift more than 17. The detailed provenance of this early ionizing radiation is still a matter of scientific debate. It may have included starlight from the very first population of stars (population III stars), supernovae when these first stars reached the end of their lives, or the ionizing radiation produced by the accretion disks of massive black holes.
The time following the emission of the cosmic microwave background—and before the observation of the first stars—is semi-humorously referred to by cosmologists as the Dark Age, and is a period which is under intense study by astronomers (see 21 centimeter radiation).
Two other effects which occurred between reionization and our observations of the cosmic microwave background, and which appear to cause anisotropies, are the Sunyaev–Zel'dovich effect, where a cloud of high-energy electrons scatters the radiation, transferring some of its energy to the CMB photons, and the Sachs–Wolfe effect, which causes photons from the Cosmic Microwave Background to be gravitationally redshifted or blueshifted due to changing gravitational fields.
The cosmic microwave background is polarized at the level of a few microkelvin. There are two types of polarization, called E-modes and B-modes. This is in analogy to electrostatics, in which the electric field (E-field) has a vanishing curl and the magnetic field (B-field) has a vanishing divergence. The E-modes arise naturally from Thomson scattering in a heterogeneous plasma. The B-modes are not produced by standard scalar type perturbations. Instead they can be created by two mechanisms: the first one is by gravitational lensing of E-modes, which has been measured by the South Pole Telescope in 2013; the second one is from gravitational waves arising from cosmic inflation. Detecting the B-modes is extremely difficult, particularly as the degree of foreground contamination is unknown, and the weak gravitational lensing signal mixes the relatively strong E-mode signal with the B-mode signal.
E-modes were first seen in 2002 by the Degree Angular Scale Interferometer (DASI).
Cosmologists predict two types of B-modes, the first generated during cosmic inflation shortly after the big bang, and the second generated by gravitational lensing at later times.
Primordial gravitational waves
Primordial gravitational waves are gravitational waves that could be observed in the polarisation of the cosmic microwave background and that have their origin in the early universe. Models of cosmic inflation predict that such gravitational waves should appear; thus, their detection would support the theory of inflation, and their strength can confirm or exclude different models of inflation. This background is the result of three things: inflationary expansion of space itself, reheating after inflation, and turbulent fluid mixing of matter and radiation.
On 17 March 2014 it was announced that the BICEP2 instrument had detected the first type of B-modes, consistent with inflation and gravitational waves in the early universe, at a level quantified by the tensor-to-scalar ratio, which is the amount of power present in gravitational waves compared to the amount of power present in other scalar density perturbations in the very early universe. Had this been confirmed it would have provided strong evidence for cosmic inflation and the Big Bang and against the ekpyrotic model of Paul Steinhardt and Neil Turok. However, on 19 June 2014, considerably lowered confidence in confirming the findings was reported, and on 19 September 2014 new results of the Planck experiment reported that the results of BICEP2 can be fully attributed to cosmic dust.
The second type of B-modes was discovered in 2013 using the South Pole Telescope with help from the Herschel Space Observatory. In October 2014, a measurement of the B-mode polarization at 150 GHz was published by the POLARBEAR experiment. Compared to BICEP2, POLARBEAR focuses on a smaller patch of the sky and is less susceptible to dust effects. The team reported that POLARBEAR's measured B-mode polarization was of cosmological origin (and not just due to dust) at a 97.2% confidence level.
Microwave background observations
See main article: List of cosmic microwave background experiments. Subsequent to the discovery of the CMB, hundreds of cosmic microwave background experiments have been conducted to measure and characterize the signatures of the radiation. The most famous experiment is probably the NASA Cosmic Background Explorer (COBE) satellite that orbited in 1989–1996 and which detected and quantified the large scale anisotropies at the limit of its detection capabilities. Inspired by the initial COBE results of an extremely isotropic and homogeneous background, a series of ground- and balloon-based experiments quantified CMB anisotropies on smaller angular scales over the next decade. The primary goal of these experiments was to measure the angular scale of the first acoustic peak, for which COBE did not have sufficient resolution. These measurements were able to rule out cosmic strings as the leading theory of cosmic structure formation, and suggested cosmic inflation was the right theory. During the 1990s, the first peak was measured with increasing sensitivity and by 2000 the BOOMERanG experiment reported that the highest power fluctuations occur at scales of approximately one degree. Together with other cosmological data, these results implied that the geometry of the universe is flat. A number of ground-based interferometers provided measurements of the fluctuations with higher accuracy over the next three years, including the Very Small Array, Degree Angular Scale Interferometer (DASI), and the Cosmic Background Imager (CBI). DASI made the first detection of the polarization of the CMB and the CBI provided the first E-mode polarization spectrum with compelling evidence that it is out of phase with the T-mode spectrum.
All-sky mollweide map of the CMB, created from Planck spacecraft data
In June 2001, NASA launched a second CMB space mission, WMAP, to make much more precise measurements of the large scale anisotropies over the full sky. WMAP used symmetric, rapid-multi-modulated scanning, rapid switching radiometers to minimize non-sky signal noise. The first results from this mission, disclosed in 2003, were detailed measurements of the angular power spectrum at a scale of less than one degree, tightly constraining various cosmological parameters. The results are broadly consistent with those expected from cosmic inflation as well as various other competing theories, and are available in detail at NASA's data bank for Cosmic Microwave Background (CMB) (see links below). Although WMAP provided very accurate measurements of the large scale angular fluctuations in the CMB (structures about as broad in the sky as the moon), it did not have the angular resolution to measure the smaller scale fluctuations which had been observed by former ground-based interferometers.
A third space mission, the ESA (European Space Agency) Planck Surveyor, was launched in May 2009 and performed an even more detailed investigation until it was shut down in October 2013. Planck employed both HEMT radiometers and bolometer technology and measured the CMB at a smaller scale than WMAP. Its detectors were trialled in the Antarctic Viper telescope as ACBAR (Arcminute Cosmology Bolometer Array Receiver) experiment—which has produced the most precise measurements at small angular scales to date—and in the Archeops balloon telescope.
On 21 March 2013, the European-led research team behind the Planck cosmology probe released the mission's all-sky map of the cosmic microwave background. The map suggests the universe is slightly older than researchers expected. According to the map, subtle fluctuations in temperature were imprinted on the deep sky when the cosmos was about 370,000 years old. The imprint reflects ripples that arose as early as the first nonillionth (10−30) of a second of the universe's existence. Apparently, these ripples gave rise to the present vast cosmic web of galaxy clusters and dark matter. Based on the 2013 data, the universe contains 4.9% ordinary matter, 26.8% dark matter and 68.3% dark energy. On 5 February 2015, new data was released by the Planck mission, according to which the age of the universe is about 13.8 billion years and the Hubble constant is about 67.7 km/s/Mpc.
Additional ground-based instruments such as the South Pole Telescope in Antarctica and the proposed Clover Project, Atacama Cosmology Telescope and the QUIET telescope in Chile will provide additional data not available from satellite observations, possibly including the B-mode polarization.
Data reduction and analysis
Raw CMBR data, even from space vehicles such as WMAP or Planck, contain foreground effects that completely obscure the fine-scale structure of the cosmic microwave background. The fine-scale structure is superimposed on the raw CMBR data but is too small to be seen at the scale of the raw data. The most prominent of the foreground effects is the dipole anisotropy caused by the Sun's motion relative to the CMBR background. The dipole anisotropy and others due to Earth's annual motion relative to the Sun and numerous microwave sources in the galactic plane and elsewhere must be subtracted out to reveal the extremely tiny variations characterizing the fine-scale structure of the CMBR background.
The detailed analysis of CMBR data to produce maps, an angular power spectrum, and ultimately cosmological parameters is a complicated, computationally difficult problem. Computing a power spectrum from a map is in principle a simple Fourier transform: the map of the sky is decomposed into spherical harmonics,

T(θ, φ) = Σℓm aℓm Yℓm(θ, φ),

where the a00 Y00(θ, φ) term measures the mean temperature, the remaining aℓm Yℓm(θ, φ) terms account for the fluctuations, Yℓm(θ, φ) refers to a spherical harmonic, ℓ is the multipole number, and m is the azimuthal number. By applying the angular correlation function, the sum can be reduced to an expression that involves only ℓ and the power spectrum term Cℓ ≡ ⟨|aℓm|²⟩.
The angled brackets indicate the average with respect to all observers in the universe; since the universe is homogeneous and isotropic, there is no preferred observing direction, and thus Cℓ is independent of m. Different choices of ℓ correspond to different multipole moments of the CMB.
In practice it is hard to take the effects of noise and foreground sources into account. In particular, these foregrounds are dominated by galactic emissions such as Bremsstrahlung, synchrotron, and dust that emit in the microwave band; in practice, the galaxy has to be removed, resulting in a CMB map that is not a full-sky map. In addition, point sources like galaxies and clusters represent another source of foreground which must be removed so as not to distort the short scale structure of the CMB power spectrum.
Constraints on many cosmological parameters can be obtained from their effects on the power spectrum, and results are often calculated using Markov chain Monte Carlo sampling techniques.
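As an illustrative sketch of the decomposition described above (assuming the healpy package and a hypothetical foreground-cleaned map file named cmb_map.fits), the pseudo-Cℓ estimate is only a few lines; the full analysis additionally handles masks, noise, and beam effects before any MCMC parameter fitting:

```python
# Hedged sketch: estimate the angular power spectrum C_ell from a (hypothetical)
# foreground-cleaned CMB temperature map using healpy. This is only the
# pseudo-C_ell step, not the full noise/mask/likelihood treatment.
import healpy as hp
import numpy as np

m = hp.read_map("cmb_map.fits")       # hypothetical input map, in K or uK
m = m - np.mean(m)                    # remove the mean (monopole) before the transform
cl = hp.anafast(m, lmax=2000)         # C_ell = <|a_lm|^2>, averaged over m

ell = np.arange(len(cl))
dl = ell * (ell + 1) * cl / (2 * np.pi)   # conventional D_ell scaling used in CMB plots
print(dl[:10])
```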
CMBR monopole term (ℓ = 0)
When ℓ = 0, the Y00 term reduces to a constant, and what is left is just the mean temperature of the CMB. This "mean" is called the CMB monopole, and it is observed to have an average temperature of about Tγ = 2.7255 ± 0.0006 K (one standard deviation). The accuracy of this mean temperature may be limited by the diversity of the measurements made by different mapping experiments. Such measurements demand absolute temperature devices, such as the FIRAS instrument on the COBE satellite. The measured kTγ is equivalent to 0.234 meV or 4.6 × 10−10 mec2. The photon number density of a blackbody at this temperature is about 411 photons/cm3, its energy density is about 0.26 eV/cm3, and the ratio to the critical density is Ωγ = 5.38 × 10−5.
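A minimal sketch reproducing these monopole-derived quantities from the temperature alone; the Hubble constant used for the critical density (about 67.7 km/s/Mpc) is an assumed value, not taken from this article:

```python
# Hedged sketch: photon number density, energy density, and Omega_gamma
# implied by the CMB monopole temperature. H0 is an assumed value.
import math

k_B   = 1.380649e-23     # J/K
hbar  = 1.054571817e-34  # J*s
c     = 2.99792458e8     # m/s
G     = 6.67430e-11      # m^3 kg^-1 s^-2
eV    = 1.602176634e-19  # J
T     = 2.7255           # K
H0    = 67.7 * 1e3 / 3.0857e22   # assumed Hubble constant, converted to s^-1

kT = k_B * T
n_gamma = (2 * 1.2020569 / math.pi**2) * (kT / (hbar * c))**3   # photons per m^3
u_gamma = (math.pi**2 / 15) * kT**4 / (hbar * c)**3             # J per m^3
rho_crit_c2 = 3 * H0**2 * c**2 / (8 * math.pi * G)              # critical energy density, J per m^3

print(f"n_gamma ~ {n_gamma/1e6:.0f} photons/cm^3")   # ~411 photons/cm^3
print(f"u_gamma ~ {u_gamma/eV/1e6:.3f} eV/cm^3")     # ~0.26 eV/cm^3
print(f"Omega_gamma ~ {u_gamma/rho_crit_c2:.2e}")    # ~5.4e-5
```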
CMBR dipole anisotropy (ℓ = 1)
The CMB dipole represents the largest anisotropy, which is in the first spherical harmonic (ℓ = 1). When ℓ = 1, the Y1m term reduces to a cosine function and thus encodes an amplitude fluctuation. The amplitude of the CMB dipole is around 3.3621 ± 0.0010 mK. Since the universe is presumed to be homogeneous and isotropic, an observer should see the blackbody spectrum with temperature T at every point in the sky. The spectrum of the dipole has been confirmed to be the differential of a blackbody spectrum.
The CMB dipole is frame-dependent. The CMB dipole moment can also be interpreted as the result of the peculiar motion of the Earth with respect to the CMB. Its amplitude varies with time because of the Earth's orbit about the barycentre of the Solar System, which adds a time-dependent term to the dipole expression. The modulation period of this term is one year, which fits the observations made by COBE FIRAS. The dipole moment does not encode any primordial information.
From the CMB data, it is seen that the Sun appears to be moving at about 370 km/s relative to the reference frame of the CMB (also called the CMB rest frame, the frame of reference in which there is no motion through the CMB). The Local Group — the galaxy group that includes our own Milky Way galaxy — likewise appears to be moving with respect to the CMB rest frame. This motion results in an anisotropy of the data (the CMB appearing slightly warmer in the direction of motion than in the opposite direction). The standard interpretation of this temperature variation is a simple velocity redshift and blueshift due to motion relative to the CMB, but alternative cosmological models can explain some fraction of the observed dipole temperature distribution in the CMB.
A recent study using the Wide-field Infrared Survey Explorer questions the purely kinematic interpretation of the CMB dipole anisotropy with high statistical confidence.
Multipole (ℓ ≥ 2)
The temperature variation in the CMB temperature maps at higher multipoles, ℓ ≥ 2, is considered to be the result of perturbations of the density in the early Universe, before the recombination epoch. Before recombination, the Universe consisted of a hot, dense plasma of electrons and baryons. In such a hot, dense environment, electrons and protons could not form neutral atoms. The baryons in this early Universe remained highly ionized and so were tightly coupled with photons through Thomson scattering. These phenomena caused the pressure and gravitational effects to act against each other, triggering fluctuations in the photon–baryon plasma. Shortly after the recombination epoch, the rapid expansion of the universe caused the plasma to cool, and these fluctuations were "frozen into" the CMB maps we observe today. This process happened at a redshift of around z ≈ 1100.
See also: Cosmological principle, Axis of evil (cosmology) and CMB cold spot.
With the increasingly precise data provided by WMAP, there have been a number of claims that the CMB exhibits anomalies, such as very large scale anisotropies, anomalous alignments, and non-Gaussian distributions. The most longstanding of these is the low-ℓ multipole controversy. Even in the COBE map, it was observed that the quadrupole (ℓ = 2, spherical harmonic) has a low amplitude compared to the predictions of the Big Bang. In particular, the quadrupole and octupole (ℓ = 3) modes appear to have an unexplained alignment with each other and with both the ecliptic plane and equinoxes. A number of groups have suggested that this could be the signature of new physics at the greatest observable scales; other groups suspect systematic errors in the data.
Ultimately, due to the foregrounds and the cosmic variance problem, the greatest modes will never be as well measured as the small angular scale modes. The analyses were performed on two maps that have had the foregrounds removed as far as possible: the "internal linear combination" map of the WMAP collaboration and a similar map prepared by Max Tegmark and others. Later analyses have pointed out that these are the modes most susceptible to foreground contamination from synchrotron, dust, and Bremsstrahlung emission, and from experimental uncertainty in the monopole and dipole.
A full Bayesian analysis of the WMAP power spectrum demonstrates that the quadrupole prediction of Lambda-CDM cosmology is consistent with the data at the 10% level and that the observed octupole is not remarkable. Carefully accounting for the procedure used to remove the foregrounds from the full sky map further reduces the significance of the alignment by ~5%. Recent observations with the Planck telescope, which is very much more sensitive than WMAP and has a finer angular resolution, record the same anomaly, so instrumental error (but not foreground contamination) appears to be ruled out. Coincidence is a possible explanation: Charles L. Bennett, chief scientist of WMAP, suggested that coincidence and human psychology were involved, "I do think there is a bit of a psychological effect; people want to find unusual things."
Assuming the universe keeps expanding and it does not suffer a Big Crunch, a Big Rip, or another similar fate, the cosmic microwave background will continue redshifting until it is no longer detectable, and will be superseded first by the background produced by starlight, and perhaps, later, by the background radiation fields of processes that may take place in the far future of the universe, such as proton decay, evaporation of black holes, and positronium decay.
Timeline of prediction, discovery and interpretation
Thermal (non-microwave background) temperature predictions
- 1896 – Charles Édouard Guillaume estimates the "radiation of the stars" to be 5–6K.
- 1926 – Sir Arthur Eddington estimates the non-thermal radiation of starlight in the galaxy "... by the formula E = σT4 the effective temperature corresponding to this density is 3.18° absolute ... black body"
- 1930s – Cosmologist Erich Regener calculates that the non-thermal spectrum of cosmic rays in the galaxy has an effective temperature of 2.8 K
- 1931 – Term microwave first used in print: "When trials with wavelengths as low as 18 cm. were made known, there was undisguised surprise that the problem of the micro-wave had been solved so soon." Telegraph & Telephone Journal XVII. 179/1
- 1934 – Richard Tolman shows that black-body radiation in an expanding universe cools but remains thermal
- 1938 – Nobel Prize winner (1920) Walther Nernst reestimates the cosmic ray temperature as 0.75K
- 1946 – Robert Dicke predicts "... radiation from cosmic matter" at <20 K, but did not refer to background radiation
- 1946 – George Gamow calculates a temperature of 50 K (assuming a 3-billion year old universe), commenting it "... is in reasonable agreement with the actual temperature of interstellar space", but does not mention background radiation.
- 1953 – Erwin Finlay-Freundlich in support of his tired light theory, derives a blackbody temperature for intergalactic space of 2.3K with comment from Max Born suggesting radio astronomy as the arbitrator between expanding and infinite cosmologies.
Microwave background radiation predictions and measurements
- 1941 – Andrew McKellar detected the cosmic microwave background as the coldest component of the interstellar medium by using the excitation of CN doublet lines measured by W. S. Adams in a B star, finding an "effective temperature of space" (the average bolometric temperature) of 2.3 K
- 1948 – Ralph Alpher and Robert Herman estimate "the temperature in the universe" at 5 K. Although they do not specifically mention microwave background radiation, it may be inferred.
- 1949 – Ralph Alpher and Robert Herman re-re-estimate the temperature at 28 K.
- 1953 – George Gamow estimates 7 K.
- 1956 – George Gamow estimates 6 K.
- 1955 – Émile Le Roux of the Nançay Radio Observatory, in a sky survey at λ = 33 cm, reported a near-isotropic background radiation of 3 kelvins, plus or minus 2.
- 1957 – Tigran Shmaonov reports that "the absolute effective temperature of the radioemission background ... is 4±3 K". It is noted that the "measurements showed that radiation intensity was independent of either time or direction of observation ... it is now clear that Shmaonov did observe the cosmic microwave background at a wavelength of 3.2 cm"
- 1960s – Robert Dicke re-estimates a microwave background radiation temperature of 40 K
- 1964 – A. G. Doroshkevich and Igor Dmitrievich Novikov publish a brief paper suggesting microwave searches for the black-body radiation predicted by Gamow, Alpher, and Herman, where they name the CMB radiation phenomenon as detectable.
- 1964–65 – Arno Penzias and Robert Woodrow Wilson measure the temperature to be approximately 3 K. Robert Dicke, James Peebles, P. G. Roll, and D. T. Wilkinson interpret this radiation as a signature of the big bang.
- 1966 – Rainer K. Sachs and Arthur M. Wolfe theoretically predict microwave background fluctuation amplitudes created by gravitational potential variations between observers and the last scattering surface (see Sachs-Wolfe effect)
- 1968 – Martin Rees and Dennis Sciama theoretically predict microwave background fluctuation amplitudes created by photons traversing time-dependent potential wells
- 1969 – R. A. Sunyaev and Yakov Zel'dovich study the inverse Compton scattering of microwave background photons by hot electrons (see Sunyaev–Zel'dovich effect)
- 1983 – Researchers from the Cambridge Radio Astronomy Group and the Owens Valley Radio Observatory first detect the Sunyaev–Zel'dovich effect from clusters of galaxies
- 1983 – RELIKT-1 Soviet CMB anisotropy experiment was launched.
- 1990 – FIRAS on the Cosmic Background Explorer (COBE) satellite measures the black body form of the CMB spectrum with exquisite precision, and shows that the microwave background has a nearly perfect black-body spectrum and thereby strongly constrains the density of the intergalactic medium.
- January 1992 – Scientists that analysed data from the RELIKT-1 report the discovery of anisotropy in the cosmic microwave background at the Moscow astrophysical seminar.
- 1992 – Scientists that analysed data from COBE DMR report the discovery of anisotropy in the cosmic microwave background.
- 1995 – The Cosmic Anisotropy Telescope performs the first high resolution observations of the cosmic microwave background.
- 1999 – First measurements of acoustic oscillations in the CMB anisotropy angular power spectrum from the TOCO, BOOMERANG, and Maxima Experiments. The BOOMERanG experiment makes higher quality maps at intermediate resolution, and confirms that the universe is "flat".
- 2002 – Polarization discovered by DASI.
- 2003 – E-mode polarization spectrum obtained by the CBI. The CBI and the Very Small Array produce yet higher quality maps at high resolution (covering small areas of the sky).
- 2003 – The Wilkinson Microwave Anisotropy Probe spacecraft produces an even higher quality map at low and intermediate resolution of the whole sky (WMAP provided no high-resolution data, but improved on the intermediate resolution maps from BOOMERanG).
- 2004 – E-mode polarization spectrum obtained by the CBI.
- 2004 – The Arcminute Cosmology Bolometer Array Receiver produces a higher quality map of the high resolution structure not mapped by WMAP.
- 2005 – The Arcminute Microkelvin Imager and the Sunyaev–Zel'dovich Array begin the first surveys for very high redshift clusters of galaxies using the Sunyaev–Zel'dovich effect.
- 2005 – Ralph A. Alpher is awarded the National Medal of Science for his groundbreaking work in nucleosynthesis and prediction that the universe expansion leaves behind background radiation, thus providing a model for the Big Bang theory.
- 2006 – The long-awaited three-year WMAP results are released, confirming previous analysis, correcting several points, and including polarization data.
- 2006 – Two of COBE's principal investigators, George Smoot and John Mather, received the Nobel Prize in Physics in 2006 for their work on precision measurement of the CMBR.
- 2006–2011 – Improved measurements from WMAP, new supernova surveys ESSENCE and SNLS, and baryon acoustic oscillations from SDSS and WiggleZ, continue to be consistent with the standard Lambda-CDM model.
- 2010 – The first all-sky map from the Planck telescope is released.
- 2013 – An improved all-sky map from the Planck telescope is released, improving the measurements of WMAP and extending them to much smaller scales.
- 2014 – On March 17, 2014, astrophysicists of the BICEP2 collaboration announced the detection of inflationary gravitational waves in the B-mode power spectrum, which, if confirmed, would provide clear experimental evidence for the theory of inflation. However, on 19 June 2014, the collaboration reported lowered confidence in the finding.
- 2015 – On January 30, 2015, the same team of astronomers from BICEP2 withdrew the claim made the previous year. Based on the combined data of BICEP2 and Planck, the European Space Agency announced that the signal could be attributed entirely to dust in the Milky Way.
- 2018 – The final data and maps from the Planck telescope are released, with improved measurements of the polarization on large scales.
- 2019 – Analyses of the final 2018 Planck data continue to be released.
In popular culture
- In the Stargate Universe TV series (2009-2011), an Ancient spaceship, Destiny, was built to study patterns in the CMBR which indicate that the universe as we know it might have been created by some form of sentient intelligence.
- In Wheelers, a novel (2000) by Ian Stewart & Jack Cohen, CMBR is explained as the encrypted transmissions of an ancient civilization. This allows the Jovian "blimps" to have a society older than the currently observed age of the universe.
- In The Three-Body Problem, a 2008 novel by Liu Cixin, a probe from an alien civilization compromises instruments monitoring the CMBR in order to deceive a character into believing the civilization has the power to manipulate the CMBR itself.
- The 2017 issue of the Swiss 20-franc banknote lists several astronomical objects with their distances – the CMB is mentioned with 430 · 10^15 light-seconds (see the quick conversion check after this list).
- In the 2021 Marvel series WandaVision, a mysterious television broadcast is discovered within the Cosmic Microwave Background.
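As a rough sanity check on the banknote figure referenced above (the arithmetic here is my own back-of-the-envelope illustration, not something printed on the note), dividing the quoted distance by the number of seconds in a year gives a light-travel time of roughly 13.6 billion years, consistent with the lookback time to the CMB:

```python
# Back-of-the-envelope check of the 20-franc note's quoted distance to the CMB.
# All numbers are approximate; this is an illustrative calculation only.
SECONDS_PER_YEAR = 3.156e7          # roughly one Julian year in seconds
distance_light_seconds = 430e15     # the value quoted on the banknote

light_travel_time_years = distance_light_seconds / SECONDS_PER_YEAR
print(f"{light_travel_time_years:.2e} years")  # ~1.36e10, i.e. about 13.6 billion years
```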
RESOURCES: UNDERSTANDING AUTISM SPECTRUM DISORDER
Autism spectrum disorder is a developmental disorder marked by impaired social interaction, limited communication, behavioral challenges, and a restricted range of activities and interests. It has been estimated to affect 1 out of every 68 children in the United States and is five times more common in boys than in girls.
Children with autism can show a wide variety of behavioral symptoms, from failure to develop appropriate peer relationships to a delay in or a total lack of spoken language.
For children who do speak, there may be a repetitive use of language or a delay in the ability to sustain a conversation with others. Symptoms of autism can also include hyperactivity, short attention span, impulsivity, aggressiveness, self-injurious behavior, and temper tantrums.
Evidence-based autism treatment promotes the development of social and communication skills and minimizes behaviors that interfere with functioning and learning. Intensive, sustained evidence-based autism treatment can increase an individual’s ability to acquire language, learn, function in the community, and fulfill his/her potential.
The Diagnostic and Statistical Manual of Mental Disorders is the most widely accepted reference used for the classification and diagnosis of ASD. The most recent edition (DSM-5; American Psychiatric Association, 2013), redefined the diagnostic criteria for ASD, which was previously regarded as three distinct diagnoses (i.e., autistic disorder, pervasive developmental disorder—not otherwise specified, and Asperger’s disorder). The DSM-5, however, classifies ASD as a single disorder characterized by persistent deficits in social communication and social interaction, in addition to restricted, repetitive patterns of behavior, interests, or activities.
SIGNS AND SYMPTOMS
The presentation and severity of symptoms vary widely among children with ASD. For some children, early signs of ASD may be observed by age 12 months or earlier. Some common signs and symptoms of ASD include:
Lack of eye contact
Not responding appropriately to greetings
Difficulty initiating and maintaining conversations with others
Not responding appropriately to others’ gestures and facial expressions
Difficulty using gestures and facial expressions appropriately
Appearing to be unaware of others’ feelings
Not engaging in pretend play
Preferring to play alone
Repeating sounds, words, or phrases out of context
Becoming distressed by minor changes in routines
Performing repetitive movements, such as hand flapping or rocking
Playing with toys in unusual ways, for instance spinning them or lining them up
Having unusually strong attachments to particular objects
Limiting conversations to very specific topics
Exhibiting oversensitivity to sounds or textures
Appearing to be indifferent to pain
Experiencing delays or plateaus in skill development
Losing previously acquired skills
Displaying challenging behaviors, such as aggression, tantrums, and self-injury
The Modified Checklist for Autism in Toddlers, Revised (M-CHAT-R) is a free, validated screening tool that assesses a child’s risk for ASD. If you have concerns about your child’s development, express your concerns immediately to your child’s pediatrician and request a referral to a specialist who can perform more thorough assessments.
The Centers for Disease Control and Prevention (CDC) estimates that 1 in 68 children are diagnosed with autism spectrum disorder (ASD) in the United States. That is about 30% higher than the previous estimate of 1 in 88 reported in 2012. The factors contributing to increases in reported rates of ASD are not fully understood. While increased rates may be partially explained by improved screening and diagnostic practices, researchers are also exploring the roles of various environmental and genetic risk factors. CDC statistics reveal that ASD is present across all races, ethnicities, and socioeconomic groups. In addition, boys are more likely to develop ASD than girls.
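For readers who want to verify the percentage themselves, here is a minimal illustration of that comparison (the arithmetic is mine; the two prevalence figures are the CDC estimates quoted above):

```python
# Illustrative arithmetic only: comparing the two CDC prevalence estimates quoted above.
rate_2014 = 1 / 68   # estimate announced in 2014
rate_2012 = 1 / 88   # earlier estimate reported in 2012

increase = (rate_2014 - rate_2012) / rate_2012
print(f"{increase:.0%}")   # ~29%, i.e. the "about 30%" increase cited above
```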
There is no single known cause for ASD. Rather, evidence suggests that there are many factors involved in the development of ASD. Researchers are actively exploring the roles of various genetic and environmental risk factors.
Genetics have been found to play a significant role in the development of ASD. Evidence indicates that siblings of children with ASD are at an increased risk of developing ASD themselves. Research conducted on twins has found genetics to play a sizable role in the development of ASD. Additionally, rates of ASD are higher among children with various genetic disorders, including fragile X syndrome and tuberous sclerosis. Numerous gene mutations have been found to increase the risk of developing ASD by varying degrees. Sometimes gene mutations are inherited from a parent who carries the same gene mutation while other times gene mutations occur spontaneously. Advanced parental age, another risk factor for ASD, may increase the chance of genetic mutations that occur spontaneously as genetic material is copied over from parent to offspring.
In addition to genetic factors, a number of environmental factors have been found to increase the risk of developing ASD. Many environmental risk factors consist of prenatal exposures, including maternal contact with high levels of air pollution, maternal viral and bacterial infections, and maternal ingestion of some prescription drugs, including selective serotonin reuptake inhibitors (a type of antidepressant). On the other hand, prenatal vitamins ingested during pregnancy and the months preceding pregnancy have been found to reduce the risk of ASD. Birth complications involving oxygen deprivation are also associated with an increased risk of ASD.
Intensive behavioral intervention (IBI) is the only empirically validated treatment for ASD. Based on the principles of applied behavior analysis (ABA), IBI is conducted at a high intensity, typically between 30 and 40 hours per week, for multiple years. Evidence suggests that greater treatment intensity leads to superior outcomes. Evidence also indicates that IBI is more effective if initiated in early development; however, services initiated at any age are beneficial for the acquisition of valuable skills.
RESOURCES: AUTISM ASSOCIATIONS
FOR AUTISM RESOURCES IN YOUR AREA, visit LOVEMYPROVIDER.COM.
ACT Today! is a national nonprofit organization whose mission is to raise awareness and provide treatment services and support to families to help their children with autism achieve their full potential.
The Autism Society, the nation's leading grassroots autism organization, exists to improve the lives of all affected by autism.
Since 1967, ARI has focused on studying treatments that improve the quality of life, sometimes to the point of recovery, for those on the autism spectrum.
Autism Resources has information and links regarding the developmental disabilities autism and Asperger's Syndrome.
Autism Speaks aims to bring the autism community together as one strong voice to urge the government and private sector to take action to address this urgent global health crisis.
TheAutSpot.com is a free online community linking family members and specialists nationwide who deal with children diagnosed with autism.
The mission of the National Autism Association is to respond to the most urgent needs of the autism community, providing real help and hope so that all affected can reach their full potential.
The mission of NIMH is to transform the understanding and treatment of mental illnesses through basic and clinical research, paving the way for prevention, recovery, and cure.
We monitor all the major news sources, websites and the latest research for important and practical news and developments with a balanced, no-spin presentation.
TACA provides support and education for families affected by autism through local chapters, coffee talks, parent support, educational events, and more.
USAAA is a leading nonprofit organization for education, support, and solutions.
Inspire builds online health and wellness communities for patients and caregivers, in partnership with national patient advocacy organizations, and helps life science organizations connect with these highly engaged populations.
Autism Parenting Magazine aims to provide you with the most current information and interventions about Autism so that you can make the most informed decisions about what will benefit your child.
YOUR SOURCE FOR AUTISM INFORMATION
Autism Live is an interactive webshow providing support, resources, information, facts, entertainment and inspiration to parents, teachers and practitioners working with children on the Autism Spectrum.
Viewers are encouraged to participate by asking questions of experts, offering suggestions for topics to be discussed, and sharing progress their children have made.
Host Shannon Penrod
Shannon Penrod is the proud mother of a nine-year-old who is recovering from autism. Her son Jem was diagnosed at the age of two and a half after having lost virtually all of his language and social skills.
Helping her son on his journey through autism became Shannon’s top priority. Whether it was researching new diets, learning the legal ins and outs of special education law, or finding funding for ABA therapy, Shannon became her son’s best advocate and an advocate for many other families. In 2009, Shannon became the host and producer of Everyday Autism Miracles, a weekly radio show that focuses solely on autism and hope.
An award-winning stand-up comedienne, director, and author, Shannon aims to provide families with information and hope on their journey through autism.
RESOURCES: BIOMEDICAL INTERVENTIONS
CARD's Position on Biomedical Treatments for Autism
ABA VS. FAD TREATMENTS
Autism is currently among the most controversial issues in American public health. Presumably because of the mysterious nature of the disorder, autism continues to be the focus of countless "fad treatments," the vast majority of which either lack scientific support or have been scientifically disproven outright (e.g., facilitated communication; see Jacobson, Mulick, & Schwartz, 1995). Several independent review sources have consistently found that early intensive applied behavior analytic intervention (ABA) continues to be the only treatment for autism that is backed by substantial scientific evidence (NYSDH, 1999; Satcher, 1999). The effectiveness of ABA has been replicated yet again in two recent outcome studies (Howard, Sparkman, Cohen, Green, & Stanislaw, 2005; Sallows & Graupner, 2005).
A substantial percentage of children with autism currently receive "biomedical" treatments, despite a current lack of evidence to support or refute most of them. In a recent survey of parents of children with autism (Green, et al., 2005), 27% of parents reported that their children with autism currently receive treatment in the form of special diets and 43% reported using vitamin supplements.
THE CARD POSITION - SOUND EMPIRICAL RESEARCH
CARD's position on the use of biomedical treatments in clinical practice is centered around three basic points:
Many parents of children with autism believe that various biomedical treatments have been responsible for substantial improvement in their children;
Very little research has been conducted on the effects of biomedical treatments for autism; and
Parents must ultimately make the decision as to which treatments are appropriate for their children, regardless of diagnosis or disorder.
CARD's position on research on biomedical treatments and on the collaboration of behavior analysts with clinicians and researchers from other disciplines is based on the points elaborated above.
At least two factors contribute to what we perceive as a grievous need for sound empirical research on the effects of biomedical treatments for autism. First, as the Green et al. (2005) study demonstrates, biomedical treatments are being implemented on a widespread basis. This fact alone is more than ample justification for conducting research on their safety and effectiveness. Such widespread use must be tempered with sound research. Second, many parents honestly believe that their children have significantly benefited from biomedical treatments. As clinicians and applied scientists, we have an ethical responsibility to take the concerns and beliefs of our clients seriously. To dismiss the convictions of our clients would be tantamount to disrespect for those who are most closely affected by autism. At the same time, ample research has demonstrated that clients can be made to believe that an intervention is or is not effective, regardless of the actual effects produced, and therefore opinion alone (be it that of a client or a scientist) is never to be accepted or rejected outright. Sound empirical treatment research is the only path toward addressing such concerns in a sufficient manner.
Throughout the history of science, particular schools of thought and areas of research have risen and fallen amidst ubiquitous controversy. Ultimately, controversy may have very little effect on which approaches to a problem are borne out. When a useful solution to a problem is discovered, and the results are replicated many times over, little is left to controversy. The use of ABA for children with autism was once highly controversial, but the unrelenting work of parents and the repeated and consistent replication of beneficial results in the scientific literature have moved the field of ABA closer to the mainstream. Most biomedical treatments for autism have not yet been subjected to repeated, rigorous outcome research. Thus, conclusions regarding their effectiveness (either for or against) cannot yet be made.
SKEPTICISM: THE BEST DEFENSE
The best safeguard against controversy in the evaluation of scientific issues may be skepticism. Skepticism does not refer to disbelief. It refers to the practice of withholding judgment on a given topic until such time as sufficient evidence warrants judgment one way or the other (Shermer, 2002). In the absence of conclusive evidence, then, one might be advised to be skeptical of the view that an intervention works, as well as to be skeptical of the view that it does not. It is CARD’s position that conclusive evidence for or against the effectiveness of most biomedical treatments for autism does not at this time exist. Hence the urgent need for empirical investigation and the futility of blind acceptance or denial of the validity of the biomedical treatment of autism.
CONTROVERSY AND RESULTING POLARIZATION
A common feature of any controversy is polarization. A casual review of the major clinical, research, and advocacy factions within the field of autism today reveals that most parties ardently maintain that biomedical treatments are either extremely effective or a total sham. Many who advocate for biomedical treatments appear to believe that all biomedical treatments are equally effective and all are virtual cures for autism. Many who reject the utility of biomedical treatments, on the other hand, do so across the entire range of treatments, without regard to the particular case for or against the many divergent treatments which fall under the biomedical umbrella. Particularly given the still largely unknown etiology of autism, either position appears premature at the current time. Autism is a spectrum disorder comprising millions of individuals who present with widely divergent characteristics. It is not yet even known whether all cases of autism share a common etiology. Given the widely divergent manifestations of the disorder, if any biomedical treatments are proven to be effective in the future, it seems unlikely that any particular one will prove equally effective for all persons with autism. Even less likely is the notion that all biomedical treatments will prove equally effective or ineffective, given the large variability within autism.
Perhaps most disturbing is the notion that it is not possible to reliably evaluate the separate and combined effects of ABA and biomedical treatments for autism. This perspective is based on a fundamental misunderstanding of the nature of experimental scientific investigation. Virtually all disciplines of experimental science agree that experimentation consists of altering one variable at a time and observing the effects that the alteration produces on another variable. Sound experimentation depends on manipulating one variable at a time while simultaneously controlling for the influence of extraneous variables. To the extent that this is done (regardless of the particular scientific field or research topic), inferences can be made about the effects of one variable on another. There is nothing peculiar about autism, ABA, or biomedical treatments that precludes this sort of experimentation. It is common for clinicians (behavioral, medical, or other) to manipulate multiple variables simultaneously in order to bring about optimal treatment outcomes, and any time this is done, precise analysis of which variables were responsible for improvement is likely to be precluded. In order to produce sound research on the separate and combined effects (if any) of each approach, experiments must hold one variable constant while manipulating another. This approach to autism research is largely untouched within the ABA and biomedical communities, but this fact does not preclude such research from being developed, and indeed it is currently under way at several research sites.
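To make the idea of evaluating separate and combined effects concrete, here is a purely illustrative sketch of a simple 2×2 layout. The group labels, sample sizes, and effect sizes below are invented for demonstration; this is not CARD's study design or real data, only a minimal example of holding one factor constant while varying the other.

```python
# Illustrative sketch only: a hypothetical 2x2 factorial layout for separating the
# individual and combined effects of two interventions, as described above.
# Group labels, effect sizes, and the outcome measure are all invented.
import random
import statistics

random.seed(0)

# Hypothetical mean improvement (arbitrary units) under each condition.
true_effects = {
    ("no_aba", "no_biomed"): 5.0,   # control
    ("aba", "no_biomed"): 20.0,     # ABA alone
    ("no_aba", "biomed"): 7.0,      # biomedical treatment alone
    ("aba", "biomed"): 22.0,        # both combined
}

def simulate_group(mean, n=30, sd=4.0):
    """Draw n hypothetical outcome scores for one treatment condition."""
    return [random.gauss(mean, sd) for _ in range(n)]

data = {cond: simulate_group(mean) for cond, mean in true_effects.items()}

# Holding one factor constant while varying the other isolates each effect.
control_mean = statistics.mean(data[("no_aba", "no_biomed")])
aba_effect = statistics.mean(data[("aba", "no_biomed")]) - control_mean
biomed_effect = statistics.mean(data[("no_aba", "biomed")]) - control_mean
combined = statistics.mean(data[("aba", "biomed")]) - control_mean

# The interaction asks whether the combination differs from the sum of the parts.
interaction = combined - (aba_effect + biomed_effect)

print(f"ABA alone:        {aba_effect:+.1f}")
print(f"Biomedical alone: {biomed_effect:+.1f}")
print(f"Combined:         {combined:+.1f}")
print(f"Interaction:      {interaction:+.1f}")
```

Toggling one factor at a time against the same control group is what isolates each effect; the interaction term then indicates whether the combination adds anything beyond the two effects considered separately.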
COLLABORATION BETWEEN BEHAVIOR ANALYSTS & MEDICAL PROFESSIONALS
Toward this end, CARD is currently collaborating with medical doctors to conduct sound research on biomedical treatments for autism. The focus of this effort is to identify which, if any, biomedical treatments result in which particular improvements for particular individuals with autism, given their unique biomedical and behavioral status. All research conducted is interdisciplinary in nature and all treatment studies evaluate multiple behavioral and biomedical measures. The goal is to establish a model for interdisciplinary collaboration between behavior analysts and medical doctors in researching treatment for individuals with autism. It is our hope that the research produced will forge a path toward addressing the debate regarding biomedical treatments for autism with sound scientific data, thereby displacing the current culture of hearsay and controversy.
In summary, the CARD position on the biomedical treatment of autism is not one of belief or disbelief, it is one of uncertainty. It is our hope that the coming decade will yield the evidence which is so desperately needed to transform the current debate about biomedical treatment from one based on subjective report to one which is grounded in sophisticated analysis of sound scientific data.
Green, V. A., Pituch, K. A., Itchon, J., Choi, A., O’Reilly, M., & Sigafoos, J. (in press). Internet survey of treatments used by parents of children with autism. Research in Developmental Disabilities.
Howard, J. S., Sparkman, C. R., Cohen, H. G., Green, G., & Stanislaw, H. (2005). A comparison of intensive behavior analytic and eclectic treatments for young children with autism. Research in Developmental Disabilities, 26, 359-383.
Jacobson, J., Mulick, J. A., & Schwartz, A. A. (1995). A history of facilitated communication: Science, pseudoscience, and antiscience: Science working group on facilitated communication. American Psychologist, 50, 750-765.
New York State Department of Health Early Intervention Program. (1999). Clinical practice guideline: Report of the recommendations, autism/pervasive developmental disorders, assessment and intervention for young children (Publication #4215). Health Education Services, P.O. Box 7126, Albany, NY 12224.
Sallows, G. O., & Graupner, T. D. (2005). Intensive behavioral treatment for children with autism: Four-year outcome and predictors. American Journal on Mental Retardation, 110, 417-438.
Satcher, D. (1999). Mental health: A report of the Surgeon General. Bethesda, MD: U.S. Public Health Service.
Shermer, M. (2002). Why people believe weird things: Pseudoscience, superstition, and other confusions of our time. New York, NY: Henry Holt.
RESOURCES: AUTISM EDUCATION RIGHTS
Securing Funding for Your Child's Program
Under both State and Federal Law, your child is entitled to a Free and Appropriate Public Education (FAPE). Therefore, if it is established that the system of public education available to your child is not "appropriate" for his or her needs, it may be possible to secure funding for an ABA program which may be considered more appropriate to meet the educational needs of your child.
You may be able to secure partial funding from your school district or local education authority for the educational needs of your child, and partial funding from your local regional center or Medicaid agency for the behavioral needs of your child. In some circumstances, education authorities and Medicaid or similar state agencies work out a formula to share the cost of your child's ABA program.
If your child is under the age of three, you will not be eligible to gain funding from your school district. Because of this, you may be able to gain the bulk of your funding from a commercial insurance plan and/or state Medicaid or Health Department agency. To gain partial or full funding for your child's ABA program from such an agency, you must contact the service coordinating agency in your area (such as a Regional Center) and request that an assessment of your child be conducted. A Medicaid caseworker will be assigned to you and an intake and subsequent multidisciplinary assessment will be arranged. After the assessment an Individual Family Service Plan (IFSP) meeting will be conducted. The purpose of the meeting is to discuss the results of the multidisciplinary assessment and propose goals and objectives. Once goals and objectives are agreed upon, placement and services that will assist your child in meeting those goals will be discussed. At this time, you may be offered behavioral services. It is your responsibility to ascertain the level of experience of the vendor offered to you and also to determine if the number of hours you have been offered is appropriate. Therefore, it would be necessary for you to gain the advice of a well-known private ABA professional if you are unsure about the options given to you by the state agency.
If your child is over the age of three, you may be able to gain funding from your local education authority (school district). If your child already qualifies for special education services and has a classroom placement and/or has been receiving special services, you may request a change in services and/or placement. In other words, you may request that the district fund an ABA program. In order to request such funding, you must ask your district to hold an Individualized Education Plan (IEP) meeting. All such requests should be made in writing. If you are approaching your school district for the first time, its staff will want to ascertain the eligibility of your child for special education services, and an assessment will be scheduled. Following the assessment, an IEP meeting will be arranged to discuss the results of the evaluation and propose goals and objectives. Once goals and objectives are agreed upon, placement and services will be offered. Once again, if you have questions about the appropriateness of the offer, it would be necessary for you to consult with a private ABA professional who specializes in such treatment. Some school districts provide ABA programs themselves, and CARD recommends that you observe and evaluate the programs offered by your local education authority as these programs may be able to meet your child's needs.
INDIVIDUALS WITH DISABILITIES EDUCATION ACT (IDEA)
IDEA, the Individuals with Disabilities Education Act, mandates that eligible children with disabilities have available to them special education and related services designed to address their unique and individual needs. IDEA has six principles that provide the framework around which special education services are designed and provided to students with disabilities. These principles include:
Free Appropriate Public Education (FAPE)
Appropriate Evaluation
Individualized Education Program (IEP)
Least Restrictive Environment
Parent and Student Participation in Decision Making
Procedural Safeguards
Free Appropriate Public Education: IDEA guarantees that each child with a disability will have available a free appropriate public education, often identified as FAPE. FAPE refers to special education and related services that:
"Have been provided at public expense, under public supervision and direction, and without charge
Meet the standards of the State Educational Agency
Include an appropriate preschool, elementary or secondary school education
Are provided in conformity with the Individual Education Program (IEP)" (Section 602(8))
"Appropriate" is the critical term in FAPE - the education that a child with disabilities receives needs to address his or her specific and individual educational needs. As such, what is "appropriate" for one student may not be "appropriate" for another. Determining what is "appropriate" for each student involves several processes. First, an individualized evaluation is conducted. The purpose of the evaluation is to identify the student's areas of strength and weakness in as much detail as possible. The next step is for the IEP team to discuss and develop an IEP for the student. The IEP team generates and identifies appropriate goals and objectives for the student to work on throughout the year. Furthermore, placement and the type of special education and related services appropriate for the student are identified. This decision is based on the goals and objectives that have been developed as well as the child's individual needs. In addition to specifying an appropriate placement, the team must identify and provide the supplementary aids and services in order for the child to succeed in the given educational setting.
Appropriate Evaluation: This principle assures that all children with disabilities are appropriately assessed for the purpose of determining eligibility, educational programming, and individual performance monitoring. Moreover, evaluation activities should gather information related to enabling the child to be involved in the general curriculum. Furthermore, testing and evaluation materials must be selected and administered so as not to be racially or culturally discriminatory. The information gained through the evaluation should be used to assist in the determination of what an "appropriate" education would be for the child.
INDIVIDUAL EDUCATION PROGRAM:
The term Individualized Education Program or IEP refers to "a written statement for each child with a disability that is developed, written and as appropriate, revised at least once per year." Each child's IEP contains statements as to the following:
The child's present levels of educational performance including how the child's disability affects his or her involvement in the general curriculum
Measurable annual goals including benchmarks or short-term objectives
The special education and related services and supplementary aids and services to be provided
The program modifications or supports that will be provided to the child
An explanation of the extent to which the child will not participate with nondisabled students in the regular class and in extracurricular and nonacademic activities
Any individual modifications made in the administration of State and District wide assessments
The projected date for the beginning of services, and the anticipated frequency, location and duration of those services
How the child's progress toward the annual goals will be measured and how the parents will be kept regularly informed of that progress, at least as often as they are informed of their nondisabled children's progress
LEAST RESTRICTIVE ENVIRONMENT:
The Least Restrictive Environment (LRE) is determined based upon each child's individual needs. The law's presumption is that the student should be educated in the regular classroom with nondisabled children, with supplementary aids and services provided as necessary to enable the student to succeed in that setting. A student's placement in the general education classroom is the first option the IEP team must consider. If it is determined that a student cannot be educated in the general education classroom, even when supplementary aids and services are provided, an alternative placement must be considered. As such, schools are required to ensure that a continuum of alternative placements are available. These may include, but are not limited to, special classes, special schools or home instruction.
PARENT AND STUDENT PARTICIPATION IN DECISION MAKING:
Schools are required to involve each child's parent(s) in the development of the child's IEP. Parents must be notified, must give consent, and parent's input must be solicited and considered. Students may be members of the IEP team and participate to the extent possible.
There are three main components of the law's procedural safeguards:
To protect the rights of children with disabilities and their parents
To ensure that the children and their parents are provided with the information needed to make decisions about the provision of FAPE
To provide procedures and mechanisms for resolving disagreements between parties
Autism Learning Games: Camp Discovery
Teach your child new skills with Autism Learning Games: Camp Discovery! Developed by the Center for Autism and Related Disorders (CARD), Camp Discovery offers a growing suite of learning games that appeal to children and are approved by parents, teachers and therapists. Based on CARD’s renowned curriculum, Camp Discovery creates fun learning opportunities for children with autism and is an excellent learning tool for children ages 2 and up.
BASED ON RESEARCH
The teaching methods used in Camp Discovery were designed by experts in the field of autism and are grounded in scientifically proven behavioral principles, including reinforcement and prompting. For correct responses, players receive visual and auditory rewards to keep them motivated and engaged. After incorrect responses, a unique prompting system guides the player to answer correctly. Although Camp Discovery was designed for children with autism, all children can benefit from its carefully developed teaching methods.
Many features set Camp Discovery apart from other children’s learning apps:
Every Learning Trial Begins with a Preference Assessment: Rewards for correct responses are personalized for each player based on preferences determined in the assessment. Since each player is unique and preferences will change, the assessment is repeated regularly to keep the game reinforcing.
Fun Mini Games Motivate Learning: Mini games serve as both a break from learning and an exciting reward for completing rounds.
Progress Reports Track Improvement: Camp Discovery allows you to track your child’s progress across games and generates graphs to show you how much your child is learning.
Parents Are in Control: Multiple settings can be adjusted to personalize your child’s learning experience.
SIMPLE USER INTERFACE
The Camp Discovery interface was designed for easy use. The games are voice narrated, and responses require dragging and dropping or touching flashcards.
In the News
Search Quality Software, “Mobile Project Manager Fosters Collaboration and Helps Autistic Kids”, February 6, 2014
The Cents’able Shopping, “FREE Download of Amazing Autism Learning Games Camp Discovery App for Kids”, January 19, 2014
Cult of Mac, “Camp Discovery Uses The iPad to Teach Kids With Autism”, January 16, 2014
Cydia Repo, “Camp Discovery App Review: Autism Learning Games”, January 16, 2014
The iPhone Mom, “Autism Learning Games: Camp Discovery Review”, January 15, 2014
Smart Apps for Kids, “Good Free App of the Day: the amazing Autism Learning Games: Camp Discovery!”, January 15, 2014
WBNS-TV, “App Helps Kids with Autism“, January 15, 2014
Advanced Healthcare Network, “New Autism App Gives Kids Ability to Learn”, January 15, 2014
Dexinger, “New Autism App Gives Kids Ability to Learn and Parents Means to Track Progress”, January 14, 2014
Available for iPad here: https://itunes.apple.com/us/app/camp-discovery-objects/id585678823?mt=8
Available for Android devices here: https://play.google.com/store/apps/details?id=card.
RESOURCES: PUBLIC POLICY
Medicaid-Funded Services in a School Setting
Background on Location of Services in Autism Treatment: For behavioral health treatment services, the location, such as a school or community setting, may be an integral part of the treatment plan and may be necessary to ensure treatment goals are met, especially generalization of skills across settings. That is, for treatment to be effective, skills must be generalized across all natural environments, and the school is a natural environment for a school-age child. Specifically, medically necessary autism treatment may be provided in a school setting (a) to ensure that skills acquired in the home and community generalize to the school setting; (b) when the target behavior occurs in the school setting; or (c) simply as a matter of logistics, to ensure that a child’s treatment is delivered at a sufficient level of intensity (i.e., number of hours per week).
EPSDT Mandate: Medicaid’s Early and Periodic Screening, Diagnostic, and Treatment (EPSDT) benefit for children is intentionally more robust than the Medicaid services required for adults. EPSDT requires the state to provide services that are medically necessary in community settings, including at school. EPSDT requires that the state make provisions to allow children to receive ABA treatment in school when delivering services in that setting is medically indicated.
Medicaid Services vs. IEP Services: The Medicaid mandate to provide coverage for medically necessary treatment is a much higher standard than the duty of the school under IDEA to provide Free Appropriate Public Education (FAPE). Services provided by a school under an IEP do not preclude or supplant medically necessary services that are being provided across all natural settings, including the school. Medically necessary services funded by Medicaid target goals in the treatment plan, which addresses the deficits and behaviors associated with the child’s autism diagnosis; Medicaid-funded, school-based services do not address educational/academic goals. Schools typically do not provide medically necessary treatment; they may provide supports pursuant to a different standard (some educational benefit), for different purposes (to access the educational curriculum) with differently credentialed providers (special education teachers and aides). Such services do not supplant medically necessary treatment and do not relieve Medicaid of its obligation to fund such treatment.
Relevant Case Law and CMS Guidance: When this issue has been litigated, courts have consistently determined that Medicaid agencies are responsible for funding medically necessary treatment, regardless of the location where it occurs, including school settings. Given advances in medical care and the goal of independence reflected in the Medicaid Act, services cannot be limited to the home and must also be paid for in the school setting. Providing behavioral services in the natural settings in which a child with autism functions, such as schools, is considered a best practice.
Contact: Julie Kornack
Director of Public Policy
Contributions from National Health Law Project and Daniel Unumb, Esq.
Autism in South Africa
The Star Academy in South Africa is amongst the most successful CARD affiliate sites. Star Academy has center-based facilities for autism in South Africa, with Academies in Johannesburg, Pretoria and Durban. In an effort to disseminate ABA treatment for children with autism beyond South Africa, Star Academy also maintains its own affiliate sites in Ghana and Zimbabwe and offers services to children with autism throughout South Africa and the wider continent. Star Academy continues to shine as an example of how high-quality ABA treatment can be disseminated to regions of the world that previously had little to no treatment available. www.thestaracademy.co.za
What happens when black holes collide?
It is possible for two black holes to collide. Once they come so close that they cannot escape each other’s gravity, they will merge to become one bigger black hole. Such an event would be extremely violent. Even when simulating this event on powerful computers, we cannot fully understand it. However, we do know that a black hole merger would produce tremendous energy and send massive ripples through the space-time fabric of the Universe. These ripples are called gravitational waves.
Nobody has witnessed a collision of black holes yet. However, there are many black holes in the Universe and it is not preposterous to assume that they might collide. In fact, we know of galaxies in which two supermassive black holes move dangerously close to each other. Theoretical models predict that these black holes will spiral toward each other until they eventually collide.
Gravitational waves have never been directly observed. However, they are a fundamental prediction of Einstein’s theory of general relativity. Detecting them would provide an important test of our understanding of gravity. It would also provide important new insights into the physics of black holes. Large instruments capable of detecting gravitational waves from outer space have been built in recent years. Even more powerful instruments are under construction. The moment they detect their first gravitational wave, you are sure to hear about it!
Black Hole Collision May Have Exploded With Light
In a first, astronomers may have seen light from the merger of two black holes, providing opportunities to learn about these mysterious dark objects.
When two black holes spiral around each other and ultimately collide, they send out gravitational waves – ripples in space and time that can be detected with extremely sensitive instruments on Earth. Since black holes and black hole mergers are completely dark, these events are invisible to telescopes and other light-detecting instruments used by astronomers. However, theorists have come up with ideas about how a black hole merger could produce a light signal by causing nearby material to radiate.
Astronomy: What is a black hole, and what happens when two collide?
A black hole is an area in space where the gravity is so strong that nothing, not even light, can escape.
It is a region where a massive amount of matter has been squeezed together into a small space. And where you have mass, you have gravity.
They are not tears or “holes” in the universe – they are real objects. In fact, black holes are spheres, like a ball sitting in space just like Earth or the Sun.
General Information about the Spinal Column
Overview of the spine
The human spine (vertebral column) is the most important anatomical and functional axis of the body. It consists of a total of 7 cervical vertebrae, 12 thoracic vertebrae and 5 lumbar vertebrae and is bounded cranially by the skull and caudally by the sacrum.
The sacrum is a bony structure consisting of 5 fused sacral vertebrae. Because the sacral vertebrae no longer form motion segments due to this fusion, they are not counted as true vertebrae of the spine. The same applies to the tailbone (coccyx), which represents a fusion of 4 coccygeal vertebrae that likewise contain no motion segments.
Some people have 6 lumbar vertebrae (lumbarization of S1) or 3–6 coccygeal vertebrae; this, however, has no pathogenic effect on the skeletal system. Except in the segments C0/C1 and C1/C2, a disc is located between each pair of vertebrae; it primarily takes over buffering and protective functions.
Viewed laterally, the spine shows a clear S-curvature. The S-shape is formed by a lordosis of the cervical and lumbar spine and a kyphosis of the thoracic spine. It serves to mitigate longitudinal forces; secondary and lateral shear forces are intercepted more by the muscles than by the osseous and capsular ligament structures of the vertebrae.
The spine is covered by strong ligament systems that ensure stability and limit certain movements so that they cannot inflict harm on the surrounding structures. Furthermore, the spine forms the spinal canal, which houses the spinal cord and the spinal nerves that pass into peripheral neural structures outside the canal.
The motion segment
The motion segment is the functional unit of the spine and is formed by two adjacent vertebrae (e.g. C6 and C7 in the cervical spine). In addition, the following structures are part of the motion segment:
- The intervertebral space with the intervertebral disc
- Vertebral arches
- The spinous and transverse processes
- All soft tissue present in the segment
- The spinal nerve in the segment
- The skin area innervated by the segment's spinal nerve
The individual systems are anatomically and functionally perfectly matched and form a functional unit. Pathophysiologically, however, this means that a malfunction in one component has an immediate negative impact on the other structures.
The ventral system – consisting of the vertebral body and intervertebral disc – has the primary task of absorbing axial compressive forces and passing them on so that they can be compensated without causing damage. Furthermore, the ventral system is capable of limiting movements. The dorsal system consists of the vertebral arch joints, muscles and ligaments and serves the execution and inhibition of movement.
Viewed laterally, the block-like vertebral body forms so-called marginal ridges cranially and caudally. These ridges are bony edges that limit the bearing surface of the disc on the vertebral body and act as points of fixation for fibrocartilage.
In transverse section, the vertebrae are convex anteriorly and straight posteriorly. The osseous body itself is made of a cancellous bone structure encased in cortical bone. These structures form lines of force within the corpus, running craniocaudally, and ensure force transmission during axial compression. If this system did not exist, even the lightest compressive forces would cause the vertebral body to fracture. The base- and endplates, covered with hyaline cartilage, close the corpus off from the disc; the surface facing the disc consists of fibrocartilage.
The vertebral arch, also called the neural arch, lies posterior to the vertebral body. It is formed by the laminae dorsally and the pedicles laterally. In the transition region between pedicle and lamina lie the articular processes, cranial (superior articular process) and caudal (inferior articular process), which, together with those of the adjacent segments, make up the so-called facet joints. The lamina consists of two symmetrical halves that are fused together.
From the vertebral arch, three processes project. The spinous process (or spine) lies in the posterior midline, while the two transverse processes lie on either side at the junction of pedicle and lamina.
The vertebral foramen is a large central opening formed between the vertebral body anteriorly and the vertebral arch posteriorly. Its diameter and shape vary with the segment level as a result of the differences in the size of the vertebral bodies and their orientation in space across the sections of the spine.
Due to the “building-block” arrangement of the spine, the vertebral foramina collectively form the vertebral canal, through which the spinal cord runs along with its meninges and associated structures. In the lumbar spine, the transition region between the spinal canal and the intervertebral foramen is described as the lateral recess, which is lined with epidural fatty tissue. Here the nerve root passes through.
The spinal canal
The spinal canal (or vertebral canal) forms a passage for a number of important structures of the human body. This contribution is limited to the neural structures: the spinal cord and the spinal dura mater.
The spinal cord is a core element of the central nervous system. It ends at L1/L2 with the conus medullaris, which continues caudally as the cauda equina. Furthermore, the spinal cord features cervical and lumbosacral enlargements, from which the cervical and lumbar spinal nerves emerge and form their respective plexuses.
Spinal dura mater
The dura mater is the outer sheath of the meninges and consists of collagen and elastic fibers. The spinal dura mater is localized only within the vertebral canal. The other portion of the dura mater is located in the skull and, owing to its location, is called the cranial dura mater.
The transverse process springs from the vertebral arch of each vertebra at the junction of the pedicle and the lamina, and differs between segments in size, shape and orientation in space.
In the cervical spine, the transverse process bears anterior and posterior tubercles. Between these bony structures, the transverse foramen is located, which serves as a pass-through opening for the vertebral artery.
In the thoracic spine, the transverse processes are strongly developed and are equipped with articulating surfaces, which articulate with the tubercles of the ribs.
In the lumbar spine, the transverse process bears three tubercles/processes, i.e. the costiform process, the mamillary process and the accessory process.
The spinous process (or spine) is the posterior bony projection of the vertebral arch. It is a fixation point for muscles and ligaments and a convenient landmark in palpation diagnostics. It is split in two, kind of like a fish tail, in the segments C2 – C6 and is single again from C7 onward. In the thoracic spine, the spinous processes are very long and run obliquely caudally, while in the lumbar region they are tall and more heavily built.
The intervertebral foramen is formed by the superior and inferior vertebral notches of each adjacent vertebral segment and serves as a passage opening for the spinal nerve, meningeal branch, spinal artery and intervertebral vein.
Four articular processes (two superior and two inferior) spring from each vertebral arch, each ending in an articular surface, the articular facet, which forms part of a vertebral arch joint.
The facet joints
The facet joints are true joints, each consisting of cartilage-covered joint surfaces and a joint capsule. They absorb compressive forces and transmit them so that movements can be performed selectively and without injury. Control is achieved via proprioceptive receptors in the capsular ligaments.
Due to the spatial alignment of the joint surfaces, certain movements can be performed more easily in some portions of the spinal column than in others. The angle of inclination of the articular surfaces relative to the horizontal is approximately 45 degrees in the cervical spine, while it is significantly higher in the thoracic spine, at approximately 80 degrees. In the lumbar region, the angle of inclination is about 90 degrees, which makes rotation in the lumbar spine significantly more limited than in the thoracic or cervical spine.
The human body has 23 intervertebral discs, as there are no discs located in the two upper cervical segments (C0/C1 and C1/C2, i.e. the segments involving the atlas and the axis). Their diameter varies along the spinal “column” because of the different pressure loads. An intervertebral disc is divided into two parts: the annulus fibrosus and the nucleus pulposus.
The annulus fibrosus is composed of several layers and encloses the disc nucleus, the nucleus pulposus. It consists of 60 – 70% water. The layers are composed of collagen fibers running in different directions depending on the layer. This provides enough compressive and tensile strength during every movement, lets the annulus function as a shock absorber, and protects the disc from immediate rupture. The annulus is attached by Sharpey’s fibers to the marginal ridges of the vertebral bodies and adheres dorsally to the posterior longitudinal ligament; only a few fibers connect it to the anterior longitudinal ligament.
The nucleus pulposus is a gel-like substance inside the disc. It consists of a combination of collagen fibers and glycans (proteoglycans, glycosaminoglycans), resulting in a dense matrix and, therefore, a strong water-binding capacity. The nucleus pulposus contains neither vessels nor nerves and serves to shift tension within the disc during spinal movements. It receives its nutrition from the surrounding vessels by diffusion. Overnight, the disc takes on liquid, which increases its height. This is the reason why a person is taller in the morning after a restful night’s sleep following a tiring day.
When a disc herniates, the nucleus emerges through a tear in the annulus fibrosus and compresses the nearby spinal nerve roots, which may result in pain and neurological deficits.
Ligament system of the spinal column
A motion segment can only be fully understood together with its functional ligament structures. They stabilize the spine in the neutral (zero) position and limit movements before they reach pathological proportions and irritate or damage delicate structures.
Posterior longitudinal ligament
This ligament originates at the clivus of the occipital bone and runs along the entire dorsal side of the vertebral bodies down to the ventral edge of the sacral canal. On its way, it connects with the annulus fibrosus of the intervertebral disc in each segment, as well as with the marginal edges of the vertebral bodies. Its purpose is to stabilize the posterior intervertebral disc area and to control or limit flexion.
Anterior longitudinal ligament
The anterior longitudinal ligament has its origin at the anterior tubercle of the atlas. It runs anteriorly along the vertebral bodies and is attached to them. In contrast to the posterior longitudinal ligament, the anterior ligament is connected to the intervertebral discs by only a few fibers. In addition to its stabilizing function, this ligament controls and limits extension and lateral flexion of the spine.
Interspinous ligaments
As the name suggests, the interspinous ligaments extend from spinous process to spinous process of the adjacent vertebrae. They also connect to the supraspinous (dorsal) and the flava (ventral) ligaments. In the cervical spine, they become part of the lig. nuchae and serve to stabilize, control and limit flexion, lateral flexion and rotation.
Supraspinous ligament
This ligament extends over the tips of the spinous processes of the adjacent vertebrae from C7 to the sacrum. It assumes the same functions as the interspinous ligaments, namely stabilization, control and limitation of flexion, lateral flexion and rotation.
Intertransverse ligaments
The intertransverse ligaments connect the transverse processes of adjacent vertebrae. They limit both flexion and lateral flexion.
Ligamenta flava
The flava ligaments span the dorsolateral spinal canal, originating from and inserting into the vertebral arch laminae of two adjacent vertebrae. In the thoracic and lumbar spine, they are connected to the capsules of the zygapophysial (facet) joints. They are among the strongest ligaments of the spine and stabilize the vertebral arch joints during flexion.
Movement axes of the spinal column
The movement axes of the spine are located in the intervertebral disc.
- The horizontal axis for flexion is located ventrally; the one for extension, dorsally.
- The sagittal axis for lateral flexion is located laterally: the axis for left lateral flexion lies on the left, and the axis for right lateral flexion lies on the right.
- Finally, the longitudinal axis for rotation is located almost exactly in the middle of the disc.
Flexion and extension
If someone performs a flexion of the spine, the intervertebral disc and the ventral portions of the vertebrae are compressed, and the inferior articular surfaces of the cranial vertebra slide cranially. In biomechanics, this phenomenon is called divergence. Conversely, during spine extension, the dorsal portions are compressed, the intervertebral foramen narrows, and the facets of the facet joints lock. This movement is called convergence.
Lateral flexion and rotation
During lateral flexion, the vertebral joints on the concave side converge, while divergence takes place on the contralateral side. This narrows the foramen on the side toward which the lateral flexion is performed. During rotation, a contralateral gliding of the superior articular facets takes place. In the cervical spine, this movement is always coupled with a lateral flexion.
Diseases of the spinal column
For the purpose of clarity, this portion will be limited to just three examples.
Disc degeneration
When the water content of the body, and therefore also of the disc, decreases with advancing age, the proportion of collagen fibers increases. The result is a hardening of the nucleus pulposus and a decrease in disc height. The endplates are thus subjected to more stress, and cracks form in the annulus fibrosus. This can result in sclerosis of the endplates or a rupture of the annulus, which may cause a prolapse.
Prolapsed (Slipped) disc
If all tissue layers of the annulus fibrosus rupture, the gel-like nucleus pulposus leaks outward and compresses the delicate surrounding tissue (colloquially, a herniated disc). If the nucleus enters the spinal canal and irritates a spinal nerve, pain can develop and neuronal lesions (such as paresthesia or motor paralysis) can occur. Depending on the segment level at which the prolapse occurs, it can also lead to the loss of bladder and rectal function (cauda equina syndrome). If the nucleus detaches completely from the annulus, this is called sequestration. If the nucleus only penetrates partially, without the annulus suffering a complete tear, this is known in medicine as a protrusion.
Spinal stenosis
Spinal stenosis refers to a narrowing of the spinal canal, which can be the result of a prolapse, but also of tumors or inflammation with edema. Depending on the localization of the pathogenic event, a distinction is made between central and lateral stenosis. Depending on the location and severity, the spinal nerve, spinal cord or blood vessels may be affected by the compression.
Common Exam Questions about the Spinal Column
The answers can be found below the references.
Where in the human spine is there no intervertebral disc?
- L1/L2 + L2/L3
- Th11/Th12 + Th12/L1
- C0/C1 + C1/C2
- L4/L5 + L5/S1
- An intervertebral disc is located between each vertebra.
In which segments is the spinous process split?
- C1 – C4
- C1 – C7
- C2 – C7
- C2 – C6
- C3 – C6
Which structure houses the movement axes of a vertebral segment?
- Vertebral arch
- Vertebral foramen
- Zygapophysial joint
- Intervertebral disc
t-Tests Explained: t-Values and t-Distributions
T-tests allow us to take two groups of samples and check whether there is a significant difference between them. As a hypothesis testing tool, they can be used to determine whether an observed difference is a coincidence, or whether it generalizes to the entire population.
To exemplify, let’s say we have a crop field to which we are going to apply a new type of fertilizer. In order to tell if the chemical has an effect on the harvest, we will take a sample of crops before the treatment, and then another one after. Running a t-test on the samples will tell us if there is a substantial difference between “before” and “after”, and if the findings are valid for the whole field.
This post examines how t-tests are performed using t-values and t-distributions, which will help us understand probability calculation and hypothesis assessment.
We'll focus on clear explanations of the core concepts — t-values and t-distributions — and use graphs rather than numbers and equations for illustration.
What Are t-Values?
The very term t-test reflects that the test results are based entirely on t-values. A t-value is what statisticians call a test statistic: it is calculated from your sample data during a hypothesis test and is then used to compare your data to what is expected under the so-called null hypothesis. If the resulting t-value is extreme enough, you have found a deviation from the null hypothesis significant enough to allow you to reject the null.
The way it works is as follows. During any type of t-test, your sample data (its size and variability) is processed and distilled into a single number - the t-value. A result equal to zero means that your data matches the null hypothesis exactly. As the absolute value of the t-value increases, so does the difference between the sample data and the null hypothesis.
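To make the distillation concrete, here is a minimal sketch of a one-sample t-value calculation in Python; the sample values and the null-hypothesis mean mu0 are invented for illustration and do not come from the text.

```python
# Minimal sketch: distilling a sample into a one-sample t-value.
import math

sample = [20.1, 22.4, 19.8, 21.5, 20.9, 23.0, 21.2]  # hypothetical data
mu0 = 20.0  # mean claimed by the null hypothesis

n = len(sample)
mean = sum(sample) / n
# Sample standard deviation (n - 1 in the denominator).
sd = math.sqrt(sum((x - mean) ** 2 for x in sample) / (n - 1))

# Signal (distance from the null) divided by noise (standard error):
t_value = (mean - mu0) / (sd / math.sqrt(n))
print(round(t_value, 3))  # 0 would mean the data matches the null exactly
```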
However, the analysis doesn't stop there. The difficulty is that the t-value is a unitless statistic and isn't very informative by itself. Say, for instance, the calculations performed under a t-test resulted in a t-value of 2. In this case, we know that there is a difference from the null hypothesis because the result is not zero. But just how common or rare is this difference, given that the null hypothesis is correct? "2" tells us nothing about that.
In order to make any judgments in this regard, we need to look at the t-value in a broader context, such as the one provided by t-distributions.
What Are t-Distributions?
The first concept we need to familiarize ourselves with is sampling distribution. Here is the basic idea. A single t-test generates a single t-value. So, to get a distribution, we need to take multiple random samples of equal size from the same population and feed them through the same t-test. This way we will get a spread of t-values, which can be plotted as a sampling distribution - a special case of probability distributions.
Fortunately, the properties of t-distributions are well understood, so you don't actually need to collect countless samples to plot one. What defines any specific t-distribution is its degrees of freedom (DF). This value is closely related to sample size; therefore, the characteristics of a t-distribution will differ depending on the sample size we choose to work with.
A visual representation of a t-distribution is a graph showing the spread of t-values obtained from a population where the null hypothesis is true. This sampling distribution is used to evaluate how consistent your results are with the null hypothesis.
Using t-Distribution to Check Your Sample Results Against the Null Hypothesis
When we process a random set of samples from a certain population and plot a t-distribution, we assume that the null hypothesis is correct for this population. To find out how much your data deviates from the null hypothesis, apply your study’s t-value to the t-distribution.
The sampling distribution graph above shows a t-distribution with 20 degrees of freedom, corresponding to a sample size of 21 in a one-sample t-test. The fact that the graph peaks right at zero indicates that we are most likely to obtain a t-value close to zero, and less likely the further we move away from zero in either direction. This follows from the assumption that the null hypothesis is true.
The hypothetical t-value of 2 that we assumed earlier is marked on the graph, and it demonstrates where our sample data is located relative to the null hypothesis. We see that, even though the probability is not very high, obtaining a t-value as far from zero as ±2 is not out of the question.
So, there is a positive difference between our sample data and the null hypothesis, and we see that a t-value of 2 is fairly rare. So far so good. But we still don't know precisely how rare it is. Ultimately, in the scope of this analysis, we want to be able to tell whether our findings are exceptional enough to justify rejecting the null hypothesis. We'll be able to do this after we've calculated the probability.
How to Use t-Values and t-Distributions to Calculate Probabilities
Performing any hypothesis test means taking the test statistic from a specific sample and placing it within the context of a known probability distribution. T-tests are no exception: placing a t-value in the context of the corresponding t-distribution enables you to derive the probabilities associated with that t-value. If a t-value is sufficiently exceptional under the null hypothesis, we can reject the null.
Before we proceed with calculating the probability associated with our hypothetical value of 2, there are two critical points to clarify.
We'll be using t-values of +2 and -2 because we are currently looking at the results of a so-called two-tailed test. In a 1-sample t-test, this type of test lets you evaluate whether the sample average is greater or less than the target value. A one-tailed t-test can only detect a difference in one direction, either positive or negative.
It is only possible to calculate meaningful probability for a range of t-values. As we see in the graph below, a range of t-values corresponds to a specific area shaded under a distribution curve - that's the probability. The probability for a single point equals zero because a single point creates no such area.
Interpreting t-Test Results for Our Hypothetical Example
With these points in mind, we're able to read the graph below: it shows the probability associated with t-values less than -2 and greater than +2. The graph corresponds to our t-test design (1-sample t-test with a sample size of 21).
To interpret this distribution plot, note that each shaded region has a probability of 0.02963, so the total probability is 0.05926. In other words, t-values will fall within these areas nearly 6% of the time when the null hypothesis is true.
Some of you may already be familiar with this type of probability - it's called the p-value. The chance of t-values falling within these regions seems low, but it is not quite low enough to reject the null hypothesis at the conventional significance level of 0.05.
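As a sanity check on the figures above, the same probabilities can be computed directly from the t-distribution; this sketch assumes SciPy is available.

```python
# Two-tailed p-value for t = 2 with 20 degrees of freedom.
from scipy import stats

t_value, df = 2.0, 20
p_upper = stats.t.sf(t_value, df)  # area to the right of +2
p_two_tailed = 2 * p_upper         # plus the mirror-image area below -2
print(p_upper, p_two_tailed)       # ~0.02963 and ~0.05926
```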
t-Distributions and Sample Size
As previously mentioned, t-distributions are defined by their degrees of freedom, which are closely related to sample size. As the DF increases, the probability density in the tails decreases and the distribution becomes more tightly grouped around the center. Conversely, bulkier tails mean that there is a greater chance of t-values falling far from zero even when the null hypothesis is true. This shape variation is how t-distributions respond to the growing uncertainty that comes with smaller samples.
The graph below displays the difference in probability distribution plots between t-distributions for 5 and 30 DF.
Averages from smaller samples are usually less precise. As a result, extreme t-values are more likely with a smaller sample, which has a direct effect on p-values: a t-value of 2 in a two-tailed test has a p-value of 10.2% at 5 DF but only 5.4% at 30 DF. The bottom line is: whenever possible, aim at using bigger samples.
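A quick sketch reproducing the 10.2% and 5.4% figures, again assuming SciPy:

```python
from scipy import stats

for df in (5, 30):
    p = 2 * stats.t.sf(2.0, df)  # two-tailed p-value for t = 2
    print(df, round(p, 3))       # 5 -> ~0.102, 30 -> ~0.054
```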
Whether you need to compare a sample mean to a hypothesized or target value, compare the means of two independent samples, or find a difference between paired samples, there is a specific type of t-test for each task. Conveniently, there are also tools (such as QuickCalcs by GraphPad) you can use to perform the necessary calculations.
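For reference, here is a minimal sketch of those three variants using SciPy's standard t-test functions; the two small arrays are invented example measurements.

```python
from scipy import stats

before = [4.2, 5.1, 4.8, 5.6, 4.9]  # hypothetical measurements
after = [5.0, 5.7, 5.2, 6.1, 5.4]

# Compare one sample mean to a hypothesized value (here 5.0):
print(stats.ttest_1samp(before, popmean=5.0))

# Compare the means of two independent samples:
print(stats.ttest_ind(before, after))

# Compare paired samples (the same units measured twice):
print(stats.ttest_rel(before, after))
```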
With this guide, we wanted to provide a clear, down-to-earth explanation of what t-tests are, as well as to show the basics of how to run them and interpret the results. Let us know if you found it helpful, and please, feel free to explore our blog further for more great content! In case you need help on any data research or analytics project, get in touch with us for full assistance!
Real versus nominal value (economics)
In economics, nominal value is measured in terms of money, whereas real value is measured against goods or services. A real value is one which has been adjusted for inflation, enabling comparison of quantities as if the prices of goods had not changed on average. Changes in value in real terms therefore exclude the effect of inflation. In contrast with a real value, a nominal value has not been adjusted for inflation, and so changes in nominal value reflect at least in part the effect of inflation.
Prices and inflation
A representative collection of goods, or commodity bundle, is used for comparison purposes, to measure inflation. The nominal (unadjusted) value of the commodity bundle in a given year depends on prices current at the time, whereas the real value of the commodity bundle, if it is truly representative, in aggregate remains the same. The real values of individual goods or commodities may rise or fall against each other, in relative terms, but a representative commodity bundle as a whole retains its real value as a constant over time.
A price index is calculated relative to a base year. Indices are typically normalized at 100 in the base year. Starting from a base (or reference) year, a price index Pt represents the price of the commodity bundle over time t. In base year zero, P0 is set to 100. If for example the base year is 1992, real values are expressed in constant 1992 dollars, with the price level defined as 100 for 1992. If, for example, the price of the commodity bundle has increased in the first year by 1%, then Pt rises from P0 = 100 to P1 = 101.
The inflation rate between year t − 1 and year t is:

- inflation rate = (Pt − Pt-1) / Pt-1, where Pt and Pt-1 are the values of the price index in years t and t − 1.
The price index is applied to adjust the nominal value Q of a quantity, such as wages or total production, to obtain its real value. The real value is the value expressed in terms of purchasing power in the base year.
The price index divided by its base-year value gives the growth factor of the price index.
Real values can be found by dividing the nominal value by the growth factor of the price index. Using the price index growth factor as a divisor for converting a nominal value into a real value, the real value of a nominal quantity Qt in year t relative to the base year 0 is:

- real value = Qt / (Pt / P0)
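A small sketch of this conversion, with made-up index and quantity values; the inflation rate between consecutive years is computed the same way.

```python
# Deflating nominal values with a price index (base year 0, P0 = 100).
price_index = {0: 100.0, 1: 101.0, 2: 104.0}  # hypothetical Pt values
nominal = {0: 500.0, 1: 530.0, 2: 560.0}      # hypothetical Qt values

for t in sorted(nominal):
    real = nominal[t] / (price_index[t] / price_index[0])
    print(t, round(real, 2))  # Qt expressed in constant base-year prices

# Inflation rate between year 1 and year 2:
print((price_index[2] - price_index[1]) / price_index[1])  # ~0.0297
```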
Real growth rate
The real growth rate r is the change from one period to the next of a nominal quantity in real terms. It measures by how much the buying power of the quantity has changed, and it satisfies:

- 1 + r = (1 + g) / (1 + π), where
- g is the nominal growth rate of the quantity, and
- π is the inflation rate.

For values of π between −1 and 1, we have the Taylor series

- 1 / (1 + π) = 1 − π + π² − π³ + …

Hence as a first-order (i.e. linear) approximation,

- r = g − π
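A quick numerical check of the approximation, using assumed rates:

```python
# Exact real growth rate versus the first-order approximation r = g - pi.
g = 0.05    # assumed nominal growth rate
pi = 0.02   # assumed inflation rate

r_exact = (1 + g) / (1 + pi) - 1
r_approx = g - pi
print(round(r_exact, 4), r_approx)  # ~0.0294 vs 0.03; close for small rates
```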
Real wages and real gross domestic product
The bundle of goods used to measure the Consumer Price Index (CPI) is applicable to consumers. So for wage earners as consumers, an appropriate way to measure real wages (the buying power of wages) is to divide the nominal wage (after-tax) by the growth factor in the CPI.
Gross domestic product (GDP) is a measure of aggregate output. Nominal GDP in a particular period reflects prices which were current at the time, whereas real GDP compensates for inflation. Price indices and the U.S. National Income and Product Accounts are constructed from bundles of commodities and their respective prices. In the case of GDP, a suitable price index is the GDP price index. In the U.S. National Income and Product Accounts, nominal GDP is called GDP in current dollars (that is, in prices current for each designated year), and real GDP is called GDP in [base-year] dollars (that is, in dollars that can purchase the same quantity of commodities as in the base year).
If for years 1 and 2 (possibly a span of 20 years apart), the nominal wage W and the price level P of goods are W1, W2 and P1, P2 respectively, then real wages using year 1 as the base year are respectively:

- real wage in year 1 = W1 / (P1 / P1) = W1
- real wage in year 2 = W2 / (P2 / P1)

The real wage each year measures the buying power of the hourly wage in common terms. In this example, the real wage rate increased by 20 percent, meaning that an hour's wage would buy 20% more goods in year 2 compared with year 1.
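Because the original table of wages and prices did not survive, here is a sketch with illustrative numbers chosen to reproduce the 20 percent result; the wage and price values are assumptions, not the source's figures.

```python
# Illustrative real-wage calculation (made-up wages and price levels).
w1, w2 = 10.0, 16.0      # nominal hourly wages in years 1 and 2
p1, p2 = 1.0, 4.0 / 3.0  # price levels, year 1 as the base year

real_w1 = w1 / (p1 / p1)  # 10.0
real_w2 = w2 / (p2 / p1)  # 12.0
print(real_w2 / real_w1 - 1)  # ~0.2, i.e. a 20 percent real increase
```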
Real interest rates
As was shown in the section above on the real growth rate, where

- r is the rate of increase of a quantity in real terms,
- g is the rate of increase of the same quantity in nominal terms, and
- π is the rate of inflation,

we have, as a first-order approximation,

- r = g − π
Looking back into the past, the ex post real interest rate is approximately the historical nominal interest rate minus inflation. Looking forward into the future, the expected real interest rate is approximately the nominal interest rate minus the expected inflation rate.
Not only time-series data, as above, but also cross-section data that depends on prices (which may, for example, vary geographically) can be adjusted in a similar way. For example, the total value of a good produced in a region of a country depends on both the amount and the price. To compare the output of different regions, the nominal output in a region can be adjusted by repricing the goods at common or average prices.
- Aggregation problem
- Classical dichotomy
- Constant Item Purchasing Power Accounting
- Cost-of-living index
- Financial repression
- Fisher equation
- Index (economics)
- Inflation accounting
- Money illusion
- National accounts
- Neutrality of money
- Real interest rate
- Real prices and ideal prices
Appendix A.3: Using Graphs and Charts to Show Values of Variables
- Understand and use time-series graphs, tables, pie charts, and bar charts to illustrate data and relationships among variables.
You often see pictures representing numerical information. These pictures may take the form of graphs that show how a particular variable has changed over time, or charts that show values of a particular variable at a single point in time. We will close our introduction to graphs by looking at both ways of conveying information.
One of the most common types of graphs used in economics is called a time-series graph. A time-series graph shows how the value of a particular variable or variables has changed over some period of time. One of the variables in a time-series graph is time itself. Time is typically placed on the horizontal axis in time-series graphs. The other axis can represent any variable whose value changes over time.
The table in Panel (a) of Figure 35.18 “A Time-Series Graph” shows annual values of the unemployment rate, a measure of the percentage of workers who are looking for and available for work but are not working, in the United States from 1998 to 2007. The grid with which these values are plotted is given in Panel (b). Notice that the vertical axis is scaled from 3 to 8%, instead of beginning with zero. Time-series graphs are often presented with the vertical axis scaled over a certain range. The result is the same as introducing a break in the vertical axis, as we did in Figure 35.5 “Canceling Games and Reducing Shaquille O’Neal’s Earnings”.
The values for the U.S. unemployment rate are plotted in Panel (b) of Figure 35.18 “A Time-Series Graph”. The points plotted are then connected with a line in Panel (c).
Scaling the Vertical Axis in Time-Series Graphs
The scaling of the vertical axis in time-series graphs can give very different views of economic data. We can make a variable appear to change a great deal, or almost not at all, depending on how we scale the axis. For that reason, it is important to note carefully how the vertical axis in a time-series graph is scaled.
Consider, for example, the issue of whether an increase or decrease in income tax rates has a significant effect on federal government revenues. This became a big issue in 1993, when President Clinton proposed an increase in income tax rates. The measure was intended to boost federal revenues. Critics of the president’s proposal argued that changes in tax rates have little or no effect on federal revenues. Higher tax rates, they said, would cause some people to scale back their income-earning efforts and thus produce only a small gain—or even a loss—in revenues. Op-ed essays in The Wall Street Journal, for example, often showed a graph very much like that presented in Panel (a) of Figure 35.19 “Two Tales of Taxes and Income”. It shows federal revenues as a percentage of gross domestic product (GDP), a measure of total income in the economy, since 1960. Various tax reductions and increases were enacted during that period, but Panel (a) appears to show they had little effect on federal revenues relative to total income.
Laura Tyson, then President Clinton’s chief economic adviser, charged that those graphs were misleading. In a Wall Street Journal piece, she noted the scaling of the vertical axis used by the president’s critics. She argued that a more reasonable scaling of the axis shows that federal revenues tend to increase relative to total income in the economy and that cuts in taxes reduce the federal government’s share. Her alternative version of these events does, indeed, suggest that federal receipts have tended to rise and fall with changes in tax policy, as shown in Panel (b) of Figure 35.19 “Two Tales of Taxes and Income”.
Which version is correct? Both are. Both graphs show the same data. It is certainly true that federal revenues, relative to economic activity, have been remarkably stable over the past several decades, as emphasized by the scaling in Panel (a). But it is also true that the federal share has varied between about 17 and 20%. And a small change in the federal share translates into a large amount of tax revenue.
It is easy to be misled by time-series graphs. Large changes can be made to appear trivial and trivial changes to appear large through an artful scaling of the axes. The best advice for a careful consumer of graphical information is to note carefully the range of values shown and then to decide whether the changes are really significant.
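To see the effect for yourself, the sketch below plots a single made-up revenue-share series twice with different vertical-axis ranges; the data points are invented, not the federal figures discussed above.

```python
# Same data, two vertical scales: the classic way to shrink or inflate
# an apparent trend in a time-series graph.
import matplotlib.pyplot as plt

years = list(range(1960, 2001, 5))
share = [17.8, 17.2, 18.7, 17.9, 18.9, 17.5, 17.8, 18.1, 19.8]  # invented

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.plot(years, share)
ax1.set_ylim(0, 100)  # wide scale: changes look trivial
ax2.plot(years, share)
ax2.set_ylim(17, 20)  # narrow scale: the same changes look dramatic
for ax in (ax1, ax2):
    ax.set_xlabel("Year")
    ax.set_ylabel("Revenues (% of GDP)")
plt.show()
```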
Testing Hypotheses with Time-Series Graphs
John Maynard Keynes, one of the most famous economists ever, proposed in 1936 a hypothesis about total spending for consumer goods in the economy. He suggested that this spending was positively related to the income households receive. One way to test such a hypothesis is to draw a time-series graph of both variables to see whether they do, in fact, tend to move together. Figure 35.20 “A Time-Series Graph of Disposable Income and Consumption” shows the values of consumption spending and disposable income, which is after-tax income received by households. Annual values of consumption and disposable income are plotted for the period 1960–2007. Notice that both variables have tended to move quite closely together. The close relationship between consumption and disposable income is consistent with Keynes’s hypothesis that there is a positive relationship between the two variables.
The fact that two variables tend to move together in a time series does not by itself prove that there is a systematic relationship between the two. Figure 35.21 “Stock Prices and a Mystery Variable” shows a time-series graph of monthly values in 1987 of the Dow Jones Industrial Average, an index that reflects the movement of the prices of common stock. Notice the steep decline in the index beginning in October, not unlike the steep decline in October 2008.
It would be useful, and certainly profitable, to be able to predict such declines. Figure 35.21 “Stock Prices and a Mystery Variable” also shows the movement of monthly values of a “mystery variable,” X, for the same period. The mystery variable and stock prices appear to move closely together. Was the plunge in the mystery variable in October responsible for the stock crash? The answer is: Not likely. The mystery value is monthly average temperatures in San Juan, Puerto Rico. Attributing the stock crash in 1987 to the weather in San Juan would be an example of the fallacy of false cause.
Notice that Figure 35.21 “Stock Prices and a Mystery Variable” has two vertical axes. The left-hand axis shows values of temperature; the right-hand axis shows values for the Dow Jones Industrial Average. Two axes are used here because the two variables, San Juan temperature and the Dow Jones Industrial Average, are scaled in different units.
We can use a table to show data. Consider, for example, the information compiled each year by the Higher Education Research Institute (HERI) at UCLA. HERI conducts a survey of first-year college students throughout the United States and asks what their intended academic majors are. The table in Panel (a) of Figure 35.22 “Intended Academic Major Area, 2007 Survey of First-Year College Students” shows the results of the 2007 survey. In the groupings given, economics is included among the social sciences.
Panels (b) and (c) of Figure 35.22 “Intended Academic Major Area, 2007 Survey of First-Year College Students” present the same information in two types of charts. Panel (b) is an example of a pie chart; Panel (c) gives the data in a bar chart. The bars in this chart are horizontal; they may also be drawn as vertical. Either type of graph may be used to provide a picture of numeric information.
- A time-series graph shows changes in a variable over time; one axis is always measured in units of time.
- One use of time-series graphs is to plot the movement of two or more variables together to see if they tend to move together or not. The fact that two variables move together does not prove that changes in one of the variables cause changes in the other.
- Values of a variable may be illustrated using a table, a pie chart, or a bar chart.
The table in Panel (a) below shows a measure of the inflation rate, the percentage change in the average level of prices. Panels (b) and (c) provide blank grids. We have already labeled the axes on the grids in Panels (b) and (c). It is up to you to plot the data in Panel (a) on the grids in Panels (b) and (c). Connect the points you have marked in the grid using straight lines between the points. What relationship do you observe? Has the inflation rate generally increased or decreased? What can you say about the trend of inflation over the course of the 1990s? Do you tend to get a different “interpretation” depending on whether you use Panel (b) or Panel (c) to guide you?
Answer to Try It!
Here are the time-series graphs, Panels (b) and (c), for the information in Panel (a). The first thing you should notice is that both graphs show that the inflation rate generally declined throughout the 1990s (with the exception of 1996, when it increased). The generally downward direction of the curve suggests that the trend of inflation was downward. Notice that in this case we do not say negative, since in this instance it is not the slope of the line that matters. Rather, inflation itself is still positive (as indicated by the fact that all the points are above the origin) but is declining. Finally, comparing Panels (b) and (c) suggests that the general downward trend in the inflation rate is emphasized less in Panel (b) than in Panel (c). This impression would be emphasized even more if the numbers on the vertical axis were increased in Panel (b) from 20 to 100. Just as in Figure 35.19 “Two Tales of Taxes and Income”, it is possible to make large changes appear trivial by simply changing the scaling of the axes.
Panel (a) shows a graph of a positive relationship; Panel (b) shows a graph of a negative relationship. Decide whether each proposition below demonstrates a positive or negative relationship, and decide which graph you would expect to illustrate each proposition. In each statement, identify which variable is the independent variable and thus goes on the horizontal axis, and which variable is the dependent variable and goes on the vertical axis.
- An increase in national income in any one year increases the number of people killed in highway accidents.
- An increase in the poverty rate causes an increase in the crime rate.
- As the income received by households rises, they purchase fewer beans.
- As the income received by households rises, they spend more on home entertainment equipment.
- The warmer the day, the less soup people consume.
Suppose you have a graph showing the results of a survey asking people how many left and right shoes they owned. The results suggest that people with one left shoe had, on average, one right shoe. People with seven left shoes had, on average, seven right shoes. Put left shoes on the vertical axis and right shoes on the horizontal axis; plot the following observations:
Left shoes:  1 2 3 4 5 6 7
Right shoes: 1 2 3 4 5 6 7
Is this relationship positive or negative? What is the slope of the curve?
Suppose your assistant inadvertently reversed the order of numbers for right shoe ownership in the survey above. You thus have the following table of observations:
Left shoes:  1 2 3 4 5 6 7
Right shoes: 7 6 5 4 3 2 1
Is the relationship between these numbers positive or negative? What’s implausible about that?
Suppose some of Ms. Alvarez’s kitchen equipment breaks down. The following table gives the values of bread output that were shown in Figure 35.12 “A Nonlinear Curve”. It also gives the new levels of bread output that Ms. Alvarez’s bakers produce following the breakdown. Plot the two curves. What has happened?
Point:                       A    B    C    D    E      F      G
Bakers/day:                  0    1    2    3    4      5      6
Loaves/day:                  0    400  700  900  1,000  1,050  1,075
Loaves/day after breakdown:  0    380  670  860  950    990    1,005
Stephen Magee has suggested that there is a relationship between the number of lawyers per capita in a country and the country’s rate of economic growth. The relationship is described with the following Magee curve.
What do you think is the argument made by the curve? What kinds of countries do you think are on the upward-sloping region of the curve? Where would you guess the United States is? Japan? Does the Magee curve seem plausible to you?
Draw graphs showing the likely relationship between each of the following pairs of variables. In each case, put the first variable mentioned on the horizontal axis and the second on the vertical axis.
- The amount of time a student spends studying economics and the grade he or she receives in the course
- Per capita income and total expenditures on health care
- Alcohol consumption by teenagers and academic performance
- Household income and the likelihood of being the victim of a violent crime
Linguistics is the scientific study of language. It involves analysis of language form, language meaning, and language in context, as well as an analysis of the social, cultural, historical, and political factors that influence language.
Linguists traditionally analyse human language by observing the relationship between sound and meaning. Meaning can be studied in its directly spoken or written form through the field of semantics, as well as in its indirect form through body language and gestures under the discipline of pragmatics. Each distinct speech sound is called a phoneme. How these phonemes are organised to convey meaning depends on various linguistic patterns and structures that theoretical linguists describe and analyse.
Some of these patterns of sound and meaning are found in the study of morphology (concerning how words are formed from "morphemes"), syntax (how sentences are logically structured), and phonology (the study of sound patterns). The emergence of historical and evolutionary linguistics has also led to a greater focus on studying how languages change and grow, particularly over extended periods of time. Sociolinguists also study how language develops among different communities through dialects, and how each language changes, grows, and varies from person to person and group to group.
Macrolinguistic concepts include the study of narrative theory, stylistics, discourse analysis, and semiotics. Microlinguistic concepts, on the other hand, involve the analysis of grammar, speech sounds, palaeographic symbols, connotation, and logical references, all of which can be applied to lexicography, editing, language documentation, translation, as well as speech-language pathology (a corrective discipline for treating phonetic disabilities and dysfunctions).
The earliest activities in the documentation and description of language have been attributed to the 6th-century-BC Indian grammarian Pāṇini, who wrote a formal description of the Sanskrit language in his Aṣṭādhyāyī. Modern-day theories of grammar still employ many of the principles that were laid down back then.
Historical linguistics is the study of language change, particularly with regards to a specific language or a group of languages. Western trends in historical linguistics date back to roughly the late 18th century, when the discipline grew out of philology (the study of ancient texts and antique documents).
Historical linguistics emerged as one of the first few sub-disciplines in the field, and was most widely practiced during the late 19th century. Despite a shift in focus in the twentieth century towards formalism and generative grammar, which studies the universal properties of language, historical research today still remains a significant field of linguistic inquiry. Subfields of the discipline include language change and grammaticalisation.
Historical linguistics studies language change either diachronically (through a comparison of different time periods in the past and present) or in a synchronic manner (by observing developments between different variations that exist within the current linguistic stage of a language).
At first, historical linguistics served as the cornerstone of comparative linguistics, which involves the study of the relationship between different languages. During this time, scholars of historical linguistics were mainly concerned with creating categories of language families and with reconstructing prehistoric proto-languages by using the comparative method and the method of internal reconstruction. Internal reconstruction is the method by which an element that contains a certain meaning is re-used in different contexts or environments where there is a variation in either sound or analogy.
The reason for this was the focus on the well-known Indo-European languages, many of which had long written histories. Scholars of historical linguistics also studied the Uralic languages, another European language family for which very little written material existed at the time. Significant work followed on the corpora of other languages too, such as the Austronesian languages and the Native American language families.
The above approach of comparativism in linguistics is now, however, only a small part of the much broader discipline called historical linguistics. The comparative study of specific Indo-European languages is considered a highly specialised field today, while comparative research is carried out over the subsequent internal developments in a language. In particular, it is carried out over the development of modern standard varieties of languages, or over the development of a language from its standardised form to its varieties.
For instance, some scholars also undertook a study attempting to establish super-families, linking, for example, Indo-European, Uralic, and other language families to Nostratic. While these attempts are still not widely accepted as credible methods, they provide necessary information to establish relatedness in language change, something that is not easily available as the depth of time increases. The time-depth of linguistic methods is generally limited, due to the occurrence of chance word resemblances and variations between language groups, but a limit of around 10,000 years is often assumed for the functional purpose of conducting research. Difficulty also exists in the dating of various proto languages. Even though several methods are available, only approximate results can be obtained in terms of arriving at dates for these languages.
Today, with a subsequent re-development of grammatical studies, historical linguistics studies the change in language on a relational basis between dialect to dialect during one period, as well as between those in the past and the present period, and looks at evolution and shifts taking place morphologically, syntactically, as well as phonetically.
Syntax and morphology
Syntax and morphology are branches of linguistics concerned with the order and structure of meaningful linguistic units such as words and morphemes. Syntacticians study the rules and constraints that govern how speakers of a language can organize words into sentences. Morphologists study similar rules for the order of morphemes (sub-word units such as prefixes and suffixes) and how they may be combined to form words.
While words, along with clitics, are generally accepted as being the smallest units of syntax, in most languages, if not all, many words can be related to other words by rules that collectively describe the grammar for that language. For example, English speakers recognize that the words dog and dogs are closely related, differentiated only by the plurality morpheme "-s", only found bound to noun phrases. Speakers of English, a fusional language, recognize these relations from their innate knowledge of English's rules of word formation. They infer intuitively that dog is to dogs as cat is to cats; and, in similar fashion, dog is to dog catcher as dish is to dishwasher. By contrast, Classical Chinese has very little morphology, using almost exclusively unbound morphemes ("free" morphemes) and depending on word order to convey meaning. (Most words in modern Standard Chinese ["Mandarin"], however, are compounds and most roots are bound.) These are understood as grammars that represent the morphology of the language. The rules understood by a speaker reflect specific patterns or regularities in the way words are formed from smaller units in the language they are using, and how those smaller units interact in speech. In this way, morphology is the branch of linguistics that studies patterns of word formation within and across languages and attempts to formulate rules that model the knowledge of the speakers of those languages.
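As a purely illustrative aside, a word-formation rule of the kind described above can be written down explicitly. The sketch below is not drawn from any particular linguistic framework; it simply models the English plural morpheme "-s" attaching to a noun stem, with a few spelling adjustments, and the function name and example stems are invented for the demonstration.

```python
# Toy model of one English word-formation rule: the plural morpheme "-s"
# bound to a noun stem. Illustrative only; real morphological analysers
# handle many more rules and exceptions (e.g. "child" -> "children").

def pluralize(stem: str) -> str:
    """Attach the plural morpheme to a noun stem, with simple spelling rules."""
    if stem.endswith(("s", "x", "z", "ch", "sh")):
        return stem + "es"            # dish -> dishes
    if stem.endswith("y") and stem[-2] not in "aeiou":
        return stem[:-1] + "ies"      # city -> cities
    return stem + "s"                 # dog -> dogs, cat -> cats

if __name__ == "__main__":
    for stem in ["dog", "cat", "dish", "city"]:
        print(stem, "->", pluralize(stem))
```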
Phonological and orthographic modifications between a base word and its origin can affect literacy skills. Studies have indicated that the presence of modification in phonology and orthography makes morphologically complex words harder to understand, and that the absence of modification between a base word and its origin makes morphologically complex words easier to understand. Morphologically complex words are easier to comprehend when they include a base word.
Polysynthetic languages, such as Chukchi, have words composed of many morphemes. The Chukchi word "tmeylevtptrkn", for example, meaning "I have a fierce headache", is composed of eight morphemes t--mey--levt-pt--rkn that may be glossed. The morphology of such languages allows for each consonant and vowel to be understood as morphemes, while the grammar of the language indicates the usage and understanding of each morpheme.
The discipline that deals specifically with the sound changes occurring within morphemes is morphophonology.
Semantics and pragmatics
Semantics and pragmatics are branches of linguistics concerned with meaning. These subfields have traditionally been divided by the role of linguistic and social context in the determination of meaning. Semantics in this conception is concerned with core meanings, and pragmatics with meaning in context. Pragmatics encompasses speech act theory, conversational implicature, talk in interaction and other approaches to language behavior in philosophy, sociology, linguistics and anthropology. Unlike semantics, which examines meaning that is conventional or "coded" in a given language, pragmatics studies how the transmission of meaning depends not only on structural and linguistic knowledge (grammar, lexicon, etc.) of the speaker and listener but also on the context of the utterance, any pre-existing knowledge about those involved, the inferred intent of the speaker, and other factors. In that respect, pragmatics explains how language users are able to overcome apparent ambiguity since meaning relies on the manner, place, time, etc. of an utterance.
Phonetics and phonology
Phonetics and phonology are branches of linguistics concerned with sounds (or the equivalent aspects of sign languages). Phonetics is largely concerned with the physical aspects of sounds such as their acoustics, production, and perception. Phonology is concerned with the linguistic abstractions and categorizations of sounds.
Languages exist on a wide continuum of conventionalization with blurry divisions between concepts such as dialects and languages. Languages can undergo internal changes which lead to the development of subvarieties such as linguistic registers, accents, and dialects. Similarly, languages can undergo changes caused by contact with speakers of other languages, and new language varieties may be born from these contact situations through the process of language genesis.
Contact varieties such as pidgins and creoles are language varieties that often arise in situations of sustained contact between communities that speak different languages. Pidgins are language varieties with limited conventionalization where ideas are conveyed through simplified grammars that may grow more complex as linguistic contact continues. Creole languages are language varieties similar to pidgins but with greater conventionalization and stability. As children grow up in contact situations, they may learn a local pidgin as their native language. Through this process of acquisition and transmission, new grammatical features and lexical items are created and introduced to fill gaps in the pidgin eventually developing into a complete language.
Not all language contact situations result in the development of a pidgin or creole, and researchers have studied the features of contact situations that make contact varieties more likely to develop. Often these varieties arise in situations of colonization and enslavement, where power imbalances prevent the contact groups from learning the other's language but sustained contact is nevertheless maintained. The subjugated language in the power relationship is the substrate language, while the dominant language serves as the superstrate. Often the words and lexicon of a contact variety come from the superstrate, making it the lexifier, while grammatical structures come from the substrate, but this is not always the case.
A dialect is a variety of language that is characteristic of a particular group among the language's speakers. The group of people who are the speakers of a dialect are usually bound to each other by social identity. This is what differentiates a dialect from a register or a discourse, where in the latter case, cultural identity does not always play a role. Dialects are speech varieties that have their own grammatical and phonological rules, linguistic features, and stylistic aspects, but have not been given an official status as a language. Dialects often move on to gain the status of a language due to political and social reasons. Other times, dialects remain marginalized, particularly when they are associated with marginalized social groups. Differentiation amongst dialects (and subsequently, languages) is based upon the use of grammatical rules, syntactic rules, and stylistic features, though not always on lexical use or vocabulary. The popular saying that "a language is a dialect with an army and navy" is attributed as a definition formulated by Max Weinreich.
"We may as individuals be rather fond of our own dialect. This should not make us think, though, that it is actually any better than any other dialect. Dialects are not good or bad, nice or nasty, right or wrong they are just different from one another, and it is the mark of a civilised society that it tolerates different dialects just as it tolerates different races, religions and sexes."
When a dialect is documented sufficiently through the linguistic description of its grammar, which has emerged through the consensual laws from within its community, it gains political and national recognition through a country or region's policies. That is the stage when a language is considered a standard variety, one whose grammatical laws have stabilised through the consent of speech community participants, after sufficient evolution, improvisation, correction, and growth. The English language, and perhaps the French language, may be examples of languages that have arrived at a stage where they are said to have become standard varieties.
As constructed popularly through the Sapir-Whorf hypothesis, relativists believe that the structure of a particular language is capable of influencing the cognitive patterns through which a person shapes his or her world view. Universalists believe that there are commonalities between human perception as there is in the human capacity for language, while relativists believe that this varies from language to language and person to person. While the Sapir-Whorf hypothesis is an elaboration of this idea expressed through the writings of American linguists Edward Sapir and Benjamin Lee Whorf, it was Sapir's student Harry Hoijer who termed it thus. The 20th-century German linguist Leo Weisgerber also wrote extensively about linguistic relativity. Relativists argue for the case of differentiation at the level of cognition and in semantic domains. The emergence of cognitive linguistics in the 1980s also revived an interest in linguistic relativity. Thinkers like George Lakoff have argued that language reflects different cultural metaphors, while the French philosopher of language Jacques Derrida's writings, especially about deconstruction, have been seen to be closely associated with the relativist movement in linguistics, for which he was heavily criticized in the media at the time of his death.
Linguistic structures are pairings of meaning and form. Any particular pairing of meaning and form is a Saussurean sign. For instance, the meaning "cat" is represented worldwide with a wide variety of different sound patterns (in oral languages), movements of the hands and face (in sign languages), and written symbols (in written languages). Linguistic patterns have proven their importance for the knowledge engineering field especially with the ever-increasing amount of available data.
Linguists focusing on structure attempt to understand the rules regarding language use that native speakers know (not always consciously). All linguistic structures can be broken down into component parts that are combined according to (sub)conscious rules, over multiple levels of analysis. For instance, consider the structure of the word "tenth" on two different levels of analysis. On the level of internal word structure (known as morphology), the word "tenth" is made up of one linguistic form indicating a number and another form indicating ordinality. The rule governing the combination of these forms ensures that the ordinality marker "th" follows the number "ten." On the level of sound structure (known as phonology), structural analysis shows that the "n" sound in "tenth" is made differently from the "n" sound in "ten" spoken alone. Although most speakers of English are consciously aware of the rules governing internal structure of the word pieces of "tenth", they are less often aware of the rule governing its sound structure. Linguists focused on structure find and analyze rules such as these, which govern how native speakers use language.
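To make the two levels of analysis concrete, here is a small sketch that represents the word "tenth" both as a sequence of morphemes and as a simplified sequence of phones. The segmentation and the phone symbols are assumptions made for the example, not claims about any particular phonological theory.

```python
# Two levels of structure for "tenth": morphological and phonological.
# The phone symbols are simplified; the dental nasal in "tenth" is written
# here as "n̪" to contrast with the alveolar "n" of "ten" spoken alone.

morphemes = [("ten", "NUMBER"), ("th", "ORDINAL")]
phones_ten = ["t", "ɛ", "n"]
phones_tenth = ["t", "ɛ", "n̪", "θ"]

print(" + ".join(f"{form} ({label})" for form, label in morphemes))
print("ten   :", " ".join(phones_ten))
print("tenth :", " ".join(phones_tenth))
```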
Grammar is a system of rules which governs the production and use of utterances in a given language. These rules apply to sound as well as meaning, and include componential subsets of rules, such as those pertaining to phonology (the organisation of phonetic sound systems), morphology (the formation and composition of words), and syntax (the formation and composition of phrases and sentences). Modern frameworks that deal with the principles of grammar include structural and functional linguistics, and generative linguistics.
Sub-fields that focus on a grammatical study of language include the following.
- Phonetics, the study of the physical properties of speech sound production and perception, and delves into their acoustic and articulatory properties
- Phonology, the study of sounds as abstract elements in the speaker's mind that distinguish meaning (phonemes)
- Morphology, the study of morphemes, or the internal structures of words and how they can be modified
- Syntax, the study of how words combine to form grammatical phrases and sentences
- Semantics, the study of the meaning of words (lexical semantics) and fixed word combinations (phraseology), and how these combine to form the meanings of sentences as well as manage and resolve ambiguity.
- Pragmatics, the study of how utterances are used in communicative acts, and the role played by situational context and non-linguistic knowledge in the transmission of meaning
- Discourse analysis, the analysis of language use in texts (spoken, written, or signed)
- Stylistics, the study of linguistic factors (rhetoric, diction, stress) that place a discourse in context
- Semiotics, the study of signs and sign processes (semiosis), indication, designation, likeness, analogy, metaphor, symbolism, signification, and communication
Discourse is language as social practice (Baynham, 1995) and is a multilayered concept. As a social practice, discourse embodies different ideologies through written and spoken texts. Discourse analysis can examine or expose these ideologies. Discourse influences genre, which is chosen in response to different situations and finally, at micro level, discourse influences language as text (spoken or written) at the phonological or lexico-grammatical level. Grammar and discourse are linked as parts of a system. A particular discourse becomes a language variety when it is used in this way for a particular purpose, and is referred to as a register. There may be certain lexical additions (new words) that are brought into play because of the expertise of the community of people within a certain domain of specialization. Registers and discourses therefore differentiate themselves through the use of vocabulary, and at times through the use of style too. People in the medical fraternity, for example, may use some medical terminology in their communication that is specialized to the field of medicine. This is often referred to as being part of the "medical discourse", and so on.
The lexicon is a catalogue of words and terms that are stored in a speaker's mind. The lexicon consists of words and bound morphemes, which are parts of words that can't stand alone, like affixes. In some analyses, compound words and certain classes of idiomatic expressions and other collocations are also considered to be part of the lexicon. Dictionaries represent attempts at listing, in alphabetical order, the lexicon of a given language; usually, however, bound morphemes are not included. Lexicography, closely linked with the domain of semantics, is the science of mapping the words into an encyclopedia or a dictionary. The creation and addition of new words (into the lexicon) is called coining or neologization, and the new words are called neologisms.
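A lexicon in this sense can be pictured as a simple lookup structure. The toy entries below are invented; the point is only that both free words and bound morphemes (which dictionaries usually omit) can be listed side by side.

```python
# Toy lexicon mixing free words and bound morphemes. Entries are invented
# and heavily simplified compared with a real lexical database.
lexicon = {
    "dog": {"type": "free", "category": "noun", "gloss": "domestic canine"},
    "run": {"type": "free", "category": "verb", "gloss": "move quickly"},
    "-s":  {"type": "bound", "category": "suffix", "gloss": "plural"},
    "un-": {"type": "bound", "category": "prefix", "gloss": "negation"},
}

# A conventional dictionary would list only the free forms.
dictionary_entries = sorted(k for k, v in lexicon.items() if v["type"] == "free")
print(dictionary_entries)  # ['dog', 'run']
```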
It is often believed that a speaker's capacity for language lies in the quantity of words stored in the lexicon. However, this is often considered a myth by linguists. The capacity for the use of language is considered by many linguists to lie primarily in the domain of grammar, and to be linked with competence, rather than with the growth of vocabulary. Even a very small lexicon is theoretically capable of producing an infinite number of sentences.
Stylistics also involves the study of written, signed, or spoken discourse through varying speech communities, genres, and editorial or narrative formats in the mass media. It involves the study and interpretation of texts for aspects of their linguistic and tonal style. Stylistic analysis entails the analysis of description of particular dialects and registers used by speech communities. Stylistic features include rhetoric, diction, stress, satire, irony, dialogue, and other forms of phonetic variations. Stylistic analysis can also include the study of language in canonical works of literature, popular fiction, news, advertisements, and other forms of communication in popular culture as well. It is usually seen as a variation in communication that changes from speaker to speaker and community to community. In short, stylistics is the interpretation of text.
In the 1960s, Jacques Derrida, for instance, further distinguished between speech and writing, by proposing that written language be studied as a linguistic medium of communication in itself. Palaeography is therefore the discipline that studies the evolution of written scripts (as signs and symbols) in language. The formal study of language also led to the growth of fields like psycholinguistics, which explores the representation and function of language in the mind; neurolinguistics, which studies language processing in the brain; biolinguistics, which studies the biology and evolution of language; and language acquisition, which investigates how children and adults acquire the knowledge of one or more languages.
The fundamental principle of humanistic linguistics is that language is an invention created by people. A semiotic tradition of linguistic research considers language a sign system which arises from the interaction of meaning and form. The organisation of linguistic levels is considered computational. Linguistics is essentially seen as relating to social and cultural studies because different languages are shaped in social interaction by the speech community. Frameworks representing the humanistic view of language include structural linguistics, among others.
Structural analysis means dissecting each linguistic level: phonetic, morphological, syntactic, and discourse, to the smallest units. These are collected into inventories (e.g. phoneme, morpheme, lexical classes, phrase types) to study their interconnectedness within a hierarchy of structures and layers. Functional analysis adds to structural analysis the assignment of semantic and other functional roles that each unit may have. For example, a noun phrase may function as the subject or object of the sentence; or the agent or patient.
Functional linguistics, or functional grammar, is a branch of structural linguistics. In the humanistic reference, the terms structuralism and functionalism are related to their meaning in other human sciences. The difference between formal and functional structuralism lies in the way that the two approaches explain why languages have the properties they have. Functional explanation entails the idea that language is a tool for communication, or that communication is the primary function of language. Linguistic forms are consequently explained by an appeal to their functional value, or usefulness. Other structuralist approaches take the perspective that form follows from the inner mechanisms of the bilateral and multilayered language system.
Other linguistics frameworks take as their starting point the notion that language is a biological phenomenon in humans. Generative Grammar is the study of an innate linguistic structure. In contrast to structural linguistics, Generative Grammar rejects the notion that meaning or social interaction affects language. Instead, all human languages are based on a crystallised structure which may have been caused by a mutation exclusively in humans. The study of linguistics is considered as the study of this hypothesised structure.
Cognitive Linguistics, in contrast, rejects the notion of innate grammar, and studies how the human brain creates linguistic constructions from event schemas, and the impact of cognitive constraints and biases on human language. Similarly to neuro-linguistic programming, language is approached via the senses. Cognitive linguists study the embodiment of knowledge by seeking expressions which relate to modal schemas.
A closely related approach is evolutionary linguistics which includes the study of linguistic units as cultural replicators. It is possible to study how language replicates and adapts to the mind of the individual or the speech community. Construction grammar is a framework which applies the meme concept to the study of syntax.
The generative and evolutionary approaches are sometimes called formalism and functionalism, respectively. This usage, however, differs from the use of the terms in the human sciences.
Linguistics is primarily descriptive. Linguists describe and explain features of language without making subjective judgments on whether a particular feature or usage is "good" or "bad". This is analogous to practice in other sciences: a zoologist studies the animal kingdom without making subjective judgments on whether a particular species is "better" or "worse" than another.
Prescription, on the other hand, is an attempt to promote particular linguistic usages over others, often favouring a particular dialect or "acrolect". This may have the aim of establishing a linguistic standard, which can aid communication over large geographical areas. It may also, however, be an attempt by speakers of one language or dialect to exert influence over speakers of other languages or dialects (see Linguistic imperialism). An extreme version of prescriptivism can be found among censors, who attempt to eradicate words and structures that they consider to be destructive to society. Prescription, however, may be practised appropriately in language instruction, like in ELT, where certain fundamental grammatical rules and lexical items need to be introduced to a second-language speaker who is attempting to acquire the language.
The objective of describing languages is often to uncover cultural knowledge about communities. The use of anthropological methods of investigation on linguistic sources leads to the discovery of certain cultural traits among a speech community through its linguistic features. It is also widely used as a tool in language documentation, with an endeavour to curate endangered languages. However, linguistic inquiry now uses the anthropological method to understand the cognitive, historical, and sociolinguistic processes that languages undergo as they change and evolve, just as general anthropological inquiry uses the linguistic method to delve into culture. In all aspects, anthropological inquiry usually uncovers the different variations and relativities that underlie the usage of language.
Most contemporary linguists work under the assumption that spoken data and signed data are more fundamental than written data. This is because
- Speech appears to be universal to all human beings capable of producing and perceiving it, while there have been many cultures and speech communities that lack written communication;
- Features appear in speech which aren't always recorded in writing, including phonological rules, sound changes, and speech errors;
- All natural writing systems reflect a spoken language (or potentially a signed one), even with pictographic scripts like Dongba writing Naxi homophones with the same pictogram, and text in writing systems used for two languages changing to fit the spoken language being recorded;
- Speech evolved before human beings invented writing;
- Individuals learn to speak and process spoken language more easily and earlier than they do with writing.
Nonetheless, linguists agree that the study of written language can be worthwhile and valuable. For research that relies on corpus linguistics and computational linguistics, written language is often much more convenient for processing large amounts of linguistic data. Large corpora of spoken language are difficult to create and hard to find, and are typically transcribed and written. In addition, linguists have turned to text-based discourse occurring in various formats of computer-mediated communication as a viable site for linguistic inquiry.
The study of writing systems themselves, graphemics, is, in any case, considered a branch of linguistics.
Before the 20th century, linguists analysed language on a diachronic plane, which was historical in focus. This meant that they would compare linguistic features and try to analyse language from the point of view of how it had changed between then and later. However, with Saussurean linguistics in the 20th century, the focus shifted to a more synchronic approach, where the study was more geared towards analysis and comparison between different language variations, which existed at the same given point of time.
At another level, the syntagmatic plane of linguistic analysis entails the comparison between the way words are sequenced, within the syntax of a sentence. For example, the article "the" is followed by a noun, because of the syntagmatic relation between the words. The paradigmatic plane on the other hand, focuses on an analysis that is based on the paradigms or concepts that are embedded in a given text. In this case, words of the same type or class may be replaced in the text with each other to achieve the same conceptual understanding.
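The two axes can be illustrated with a small sketch: the syntagmatic axis is the fixed linear frame of the sentence, while the paradigmatic axis is the set of words of the same class that can fill a given slot. The frame and word list below are invented examples.

```python
# Syntagmatic order (article + noun + verb) versus paradigmatic substitution
# (different nouns filling the same slot). Purely illustrative.
frame = ["the", None, "sleeps"]          # syntagmatic axis: fixed word order
nouns = ["dog", "cat", "child"]          # paradigmatic axis: interchangeable class

for noun in nouns:
    print(" ".join(w if w is not None else noun for w in frame))
# the dog sleeps / the cat sleeps / the child sleeps
```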
Before the 20th century, the term philology, first attested in 1716, was commonly used to refer to the study of language, which was then predominantly historical in focus. Since Ferdinand de Saussure's insistence on the importance of synchronic analysis, however, this focus has shifted and the term philology is now generally used for the "study of a language's grammar, history, and literary tradition", especially in the United States (where philology has never been very popularly considered as the "science of language").
Although the term "linguist" in the sense of "a student of language" dates from 1641, the term "linguistics" is first attested in 1847. It is now the usual term in English for the scientific study of language, though linguistic science is sometimes used.
Linguistics is a multi-disciplinary field of research that combines tools from natural sciences, social sciences, formal sciences, and the humanities. Many linguists, such as David Crystal, conceptualize the field as being primarily scientific. The term linguist applies to someone who studies language or is a researcher within the field, or to someone who uses the tools of the discipline to describe and analyse specific languages.
The formal study of language began in India with Pāṇini, the 6th century BC grammarian who formulated 3,959 rules of Sanskrit morphology. Pāṇini's systematic classification of the sounds of Sanskrit into consonants and vowels, and word classes, such as nouns and verbs, was the first known instance of its kind. In the Middle East, Sibawayh, a Persian, made a detailed description of Arabic in AD 760 in his monumental work, Al-kitab fii an-nahw (The Book on Grammar), and was the first known author to distinguish between sounds and phonemes (sounds as units of a linguistic system). Western interest in the study of languages began somewhat later than in the East, but the grammarians of the classical languages did not use the same methods or reach the same conclusions as their contemporaries in the Indic world. Early interest in language in the West was a part of philosophy, not of grammatical description. The first insights into semantic theory were made by Plato in his Cratylus dialogue, where he argues that words denote concepts that are eternal and exist in the world of ideas. This work is the first to use the word etymology to describe the history of a word's meaning. Around 280 BC, one of Alexander the Great's successors founded a university (see Musaeum) in Alexandria, where a school of philologists studied ancient texts in Greek and taught Greek to speakers of other languages. While this school was the first to use the word "grammar" in its modern sense, Plato had used the word in its original meaning as "tékhnē grammatikē", the "art of writing", which is also the title of one of the most important works of the Alexandrine school by Dionysius Thrax. Throughout the Middle Ages, the study of language was subsumed under the topic of philology, the study of ancient languages and texts, practised by such educators as Roger Ascham, Wolfgang Ratke, and John Amos Comenius.
In the 18th century, the first use of the comparative method by William Jones sparked the rise of comparative linguistics. Bloomfield attributes "the first great scientific linguistic work of the world" to Jacob Grimm, who wrote Deutsche Grammatik. It was soon followed by other authors writing similar comparative studies on other language groups of Europe. The study of language was broadened from Indo-European to language in general by Wilhelm von Humboldt, of whom Bloomfield asserts:
This study received its foundation at the hands of the Prussian statesman and scholar Wilhelm von Humboldt (1767-1835), especially in the first volume of his work on Kavi, the literary language of Java, entitled Über die Verschiedenheit des menschlichen Sprachbaues und ihren Einfluß auf die geistige Entwickelung des Menschengeschlechts (On the Variety of the Structure of Human Language and its Influence upon the Mental Development of the Human Race).
20th century developments
There was a shift of focus from historical and comparative linguistics to synchronic analysis in the early 20th century. Structural analysis was improved by Leonard Bloomfield, Louis Hjelmslev, and Zellig Harris, who also developed methods of discourse analysis. Functional analysis was developed by the Prague linguistic circle and André Martinet. As sound recording devices became commonplace in the 1960s, dialectal recordings were made and archived, and the audio-lingual method provided a technological solution to foreign language learning. The 1960s also saw a new rise of comparative linguistics: the study of language universals in linguistic typology. Towards the end of the century the field of linguistics became divided into further areas of interest with the advent of language technology and digitalised corpora.
Areas of research
Ecolinguistics explores the role of language in the life-sustaining interactions of humans, other species and the physical environment. The first aim is to develop linguistic theories which see humans not only as part of society, but also as part of the larger ecosystems that life depends on. The second aim is to show how linguistics can be used to address key ecological issues, from climate change and biodiversity loss to environmental justice.
Sociolinguistics is the study of how language is shaped by social factors. This sub-discipline focuses on the synchronic approach of linguistics, and looks at how a language in general, or a set of languages, display variation and varieties at a given point in time. The study of language variation and the different varieties of language through dialects, registers, and idiolects can be tackled through a study of style, as well as through analysis of discourse. Sociolinguists research both style and discourse in language, as well as the theoretical factors that are at play between language and society.
Developmental linguistics is the study of the development of linguistic ability in individuals, particularly the acquisition of language in childhood. Some of the questions that developmental linguistics looks into is how children acquire different languages, how adults can acquire a second language, and what the process of language acquisition is.
Neurolinguistics is the study of the structures in the human brain that underlie grammar and communication. Researchers are drawn to the field from a variety of backgrounds, bringing along a variety of experimental techniques as well as widely varying theoretical perspectives. Much work in neurolinguistics is informed by models in psycholinguistics and theoretical linguistics, and is focused on investigating how the brain can implement the processes that theoretical and psycholinguistics propose are necessary in producing and comprehending language. Neurolinguists study the physiological mechanisms by which the brain processes information related to language, and evaluate linguistic and psycholinguistic theories, using aphasiology, brain imaging, electrophysiology, and computer modelling. Among the structures of the brain involved in the mechanisms of neurolinguistics, the cerebellum, which contains the highest number of neurons, has a major role in terms of the predictions required to produce language.
Linguists are largely concerned with finding and describing the generalities and varieties both within particular languages and among all languages. Applied linguistics takes the results of those findings and "applies" them to other areas. Linguistic research is commonly applied to areas such as language education, lexicography, translation, language planning, which involves governmental policy implementation related to language use, and natural language processing. "Applied linguistics" has been argued to be something of a misnomer. Applied linguists actually focus on making sense of and engineering solutions for real-world linguistic problems, and not literally "applying" existing technical knowledge from linguistics. Moreover, they commonly apply technical knowledge from multiple sources, such as sociology (e.g., conversation analysis) and anthropology. (Constructed language fits under Applied linguistics.)
Today, computers are widely used in many areas of applied linguistics. Speech synthesis and speech recognition use phonetic and phonemic knowledge to provide voice interfaces to computers. Applications of computational linguistics in machine translation, computer-assisted translation, and natural language processing are areas of applied linguistics that have come to the forefront. Their influence has had an effect on theories of syntax and semantics, as modelling syntactic and semantic theories on computers imposes constraints.
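Much of this computational work rests on very simple statistical models of text. As a hedged illustration of the general idea (the corpus sentence is invented, and real systems are far more sophisticated), the sketch below estimates bigram probabilities from raw counts, the kind of building block used in early language models for speech recognition and machine translation.

```python
# Minimal bigram model over a toy corpus, using only the standard library.
from collections import Counter

corpus = "the cat sat on the mat . the dog sat on the rug ."
tokens = corpus.split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))

def p(next_word: str, prev_word: str) -> float:
    """Maximum-likelihood estimate of P(next_word | prev_word)."""
    return bigrams[(prev_word, next_word)] / unigrams[prev_word]

print(p("cat", "the"))  # 0.25: one of the four occurrences of "the" precedes "cat"
print(p("sat", "dog"))  # 1.0: every "dog" in this toy corpus is followed by "sat"
```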
Linguistic analysis is a sub-discipline of applied linguistics used by many governments to verify the claimed nationality of people seeking asylum who do not hold the necessary documentation to prove their claim. This often takes the form of an interview by personnel in an immigration department. Depending on the country, this interview is conducted either in the asylum seeker's native language through an interpreter or in an international lingua franca like English. Australia uses the former method, while Germany employs the latter; the Netherlands uses either method depending on the languages involved. Tape recordings of the interview then undergo language analysis, which can be done either by private contractors or within a department of the government. In this analysis, linguistic features of the asylum seeker are used by analysts to make a determination about the speaker's nationality. The reported findings of the linguistic analysis can play a critical role in the government's decision on the refugee status of the asylum seeker.
Semiotics is the study of sign processes (semiosis), or signification and communication, signs, and symbols, both individually and grouped into sign systems, including the study of how meaning is constructed and understood. Semioticians often do not restrict themselves to linguistic communication when studying the use of signs but extend the meaning of "sign" to cover all kinds of cultural symbols. Nonetheless, semiotic disciplines closely related to linguistics are literary studies, discourse analysis, text linguistics, and philosophy of language. Semiotics, within the linguistics paradigm, is the study of the relationship between language and culture. Historically, Edward Sapir and Ferdinand De Saussure's structuralist theories influenced the study of signs extensively until the late part of the 20th century, but later, post-modern and post-structural thought, through language philosophers including Jacques Derrida, Mikhail Bakhtin, Michel Foucault, and others, have also been a considerable influence on the discipline in the late part of the 20th century and early 21st century. These theories emphasize the role of language variation, and the idea of subjective usage, depending on external elements like social and cultural factors, rather than merely on the interplay of formal elements.
Language documentation combines anthropological inquiry (into the history and culture of language) with linguistic inquiry, in order to describe languages and their grammars. Lexicography involves the documentation of words that form a vocabulary. Such a documentation of a linguistic vocabulary from a particular language is usually compiled in a dictionary. Computational linguistics is concerned with the statistical or rule-based modeling of natural language from a computational perspective. Specific knowledge of language is applied by speakers during the act of translation and interpretation, as well as in language education the teaching of a second or foreign language. Policy makers work with governments to implement new plans in education and teaching which are based on linguistic research.
Since the inception of the discipline of linguistics, linguists have been concerned with describing and analysing previously undocumented languages. Starting with Franz Boas in the early 1900s, this became the main focus of American linguistics until the rise of formal linguistics in the mid-20th century. This focus on language documentation was partly motivated by a concern to document the rapidly disappearing languages of indigenous peoples. The ethnographic dimension of the Boasian approach to language description played a role in the development of disciplines such as sociolinguistics, anthropological linguistics, and linguistic anthropology, which investigate the relations between language, culture, and society.
The emphasis on linguistic description and documentation has also gained prominence outside North America, with the documentation of rapidly dying indigenous languages becoming a primary focus in many university programmes in linguistics. Language description is a work-intensive endeavour, usually requiring years of field work in the language concerned, so as to equip the linguist to write a sufficiently accurate reference grammar. Further, the task of documentation requires the linguist to collect a substantial corpus in the language in question, consisting of texts and recordings, both sound and video, which can be stored in an accessible format within open repositories, and used for further research.
The sub-field of translation includes the translation of written and spoken texts across media, from digital to print and spoken. To translate literally means to transmute the meaning from one language into another. Translators are often employed by organizations such as travel agencies and governmental embassies to facilitate communication between two speakers who do not know each other's language. Translators are also employed to work within computational linguistics setups like Google Translate, which is an automated program to translate words and phrases between any two or more given languages. Translation is also conducted by publishing houses, which convert works of writing from one language to another in order to reach varied audiences. Academic translators specialize in or are familiar with various other disciplines such as technology, science, law, economics, etc.
Clinical linguistics is the application of linguistic theory to the field of speech-language pathology. Speech language pathologists work on corrective measures to treat communication and swallowing disorders.
Chaika (1990) showed that people with schizophrenia who display speech disorders, such as rhyming inappropriately, have attentional dysfunction, as when a patient who was shown a color chip and then asked to identify it responded, "looks like clay. Sounds like gray. Take you for a roll in the hay. Heyday, May Day." The color chip was actually clay-colored, so his first response was correct.
However, most people suppress or ignore words which rhyme with what they've said unless they are deliberately producing a pun, poem or rap. Even then, the speaker shows connection between words chosen for rhyme and an overall meaning in discourse. People with schizophrenia with speech dysfunction show no such relation between rhyme and reason. Some even produce stretches of gibberish combined with recognizable words.
Computational linguistics is the study of linguistic issues in a way that is "computationally responsible", i.e., taking careful note of computational considerations such as algorithmic specification and computational complexity, so that the linguistic theories devised can be shown to exhibit certain desirable computational properties, as can their implementations. Computational linguists also work on computer language and software development.
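One classic example of an algorithm with well-understood complexity is the CKY recognizer for context-free grammars in Chomsky normal form, which decides membership in O(n^3 * |G|) time. The grammar and sentences below are toy inventions used only to show the shape of the algorithm, not a claim about any particular system.

```python
# CKY recognition for a toy context-free grammar in Chomsky normal form.
# The three nested loops over spans and split points give the O(n^3) bound.

binary_rules = {          # A -> B C
    ("NP", "VP"): {"S"},
    ("Det", "N"): {"NP"},
    ("V", "NP"): {"VP"},
}
lexical_rules = {         # A -> terminal
    "the": {"Det"}, "dog": {"N"}, "cat": {"N"}, "chased": {"V"},
}

def cky_recognize(words):
    n = len(words)
    # chart[i][j] holds the nonterminals that derive words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = set(lexical_rules.get(w, set()))
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for b in chart[i][k]:
                    for c in chart[k][j]:
                        chart[i][j] |= binary_rules.get((b, c), set())
    return "S" in chart[0][n]

print(cky_recognize("the dog chased the cat".split()))  # True
print(cky_recognize("dog the chased".split()))          # False
```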
Evolutionary linguistics is the study of the emergence of the language faculty through human evolution, and also the application of evolutionary theory to the study of cultural evolution among different languages. It is also a study of the dispersal of various languages across the globe, through movements among ancient communities. Evolutionary linguistics is a highly interdisciplinary field, including linguists, biologists, neuroscientists, psychologists, mathematicians, and others. By shifting the focus of investigation in linguistics to a comprehensive scheme that embraces the natural sciences, it seeks to yield a framework by which the fundamentals of language are understood.
Forensic linguistics is the application of linguistic analysis to forensics. Forensic analysis investigates the style, language, lexical use, and other linguistic and grammatical features used in the legal context to provide evidence in courts of law. Forensic linguists have also used their expertise in the framework of criminal cases.
- Akmajian, Adrian; Demers, Richard; Farmer, Ann; Harnish, Robert (2010). Linguistics: An Introduction to Language and Communication. Cambridge, MA: The MIT Press. ISBN 978-0-262-51370-8.
- Aronoff, Mark; Rees-Miller, Janie, eds. (2000). The Handbook of Linguistics. Oxford: Blackwell.
- Bloomfield, Leonard (1983). An Introduction to the Study of Language: New edition. Amsterdam: John Benjamins Publishing. ISBN 978-90-272-8047-3.
- Chomsky, Noam (1998). On Language. New York: The New Press. ISBN 978-1-56584-475-9.
- Derrida, Jacques (1967). Of Grammatology. The Johns Hopkins University Press. ISBN 978-0-8018-5830-7.
- Hall, Christopher (2005). An Introduction to Language and Linguistics: Breaking the Language Spell. Routledge. ISBN 978-0-8264-8734-6.
- Isac, Daniela; Reiss, Charles (2013). I-language: An Introduction to Linguistics as Cognitive Science, 2nd edition. Oxford University Press. ISBN 978-0-19-966017-9.
- Pinker, Steven (1994). The Language Instinct. William Morrow and Company. ISBN 978-0-14-017529-5.
- Crystal, David (1990). Linguistics. Penguin Books. ISBN 978-0-14-013531-2.
- The Linguist List, a global online linguistics community with news and information updated daily
- Glossary of linguistic terms by SIL International (last updated 2004)
- Glottopedia, MediaWiki-based encyclopedia of linguistics, under construction
- Linguistic sub-fields according to the Linguistic Society of America
- Linguistics and language-related wiki articles on Scholarpedia and Citizendium
- "Linguistics" section A Bibliography of Literary Theory, Criticism and Philology, ed. J.A. Garca Landa (University of Zaragoza, Spain)
- Linguistics at Curlie
This paper describes the effects that the Treaty of Versailles had on Germany both socially and economically. The settlement left its mark in 1919: it led to the birth of nations and the death of empires, and most national boundaries in Europe were redrawn. The paper first describes World War I and the enactment of the Treaty of Versailles. It then describes the effects that the treaty had on Germany and the response that the world had to the harsh conditions imposed on Germany. The Treaty of Versailles was very harsh on Germany, and it stipulated that Germany had to take the blame for initiating the war, lose all of its colonies, lose major territories in Europe, and pay a 6.6 billion reparation fee. The treaty satisfied the Big Three, since they believed in a peace agreement that would keep Germany weak. The financial penalties and the reparations imposed on her showed that the Allies only wanted to bankrupt her. These penalties affected the country's financial situation negatively. The Great War also changed German culture and influenced literature, and the social and economic consequences were felt all over the world.
World War I
World War I, also referred to as the Great War, was an international conflict that took place between 1914 and 1918 (Showalter). The war embroiled European nations, the United States, and the Middle East. Europe was divided into two camps in the early 20th century, and all the great powers looked to gain eminence, leading to jealousy and tension across Europe. Before 1914, there was evidence of a crisis that was bound to start a serious war (Showalter). The war pitted Germany, Austria-Hungary, and Turkey against the Allies: Russia, Japan, Italy, Britain, and France. The United States did not become involved until 1917 (Showalter). During the war, there was suffering unprecedented in the history of the European people. Most nations were either directly or indirectly affected by the war. Around 80 million soldiers were deployed from 1913 to 1918, and most of them either died, were left physically disabled, or were seriously injured (Showalter). Germany's active male population was decimated by the war, and it is estimated that around 6 million civilians lost their lives because of war-induced causes. Birth rates also fell during World War I (Showalter). The war took four years and ended in 1918 with the defeat of the Central Powers. This led to the fall of Turkey, Austria-Hungary, Russia, and Germany, destabilizing European society.
The End of the Great War
The Great War came as a surprise to the world. Technology had never been used in such destructive ways. War on such a scale had never before happened in world history. Although the shooting and bombing fell silent in 1918, the effects of the war are still felt today (Hobsbawm 45). Nations were born, empires died, and most national boundaries in Europe were redrawn. This brought about the prosperity of some countries while causing a depression in the economic conditions of other nations. The Great War changed culture and influenced literature. The political, social, and economic consequences were felt all over the world.
The Treaty of Versailles
The Treaty of Versailles is a peace settlement that was signed by Germany and the Allies after World War I ended in 1918 (Boemeke, Manfred and Gerald), in the shadow of the revolution in Russia and other events. The agreement was signed in France at the Palace of Versailles, a place seen as appropriate because of its size. The conference happened at a time of unprecedented ideological, economic, social, and political upheaval. The treaty was denounced in Germany, where people felt betrayed by the peace that was made. The terms of the Treaty of Versailles were so harsh that they were difficult to enforce in the European countries. The political environment played an essential role in the Allies' inability to agree on a lasting peace. The instability in Europe made it difficult for a lasting peace to be attained.
The aim of the Treaty of Versailles was to bring and maintain everlasting stability in Europe (Atkinson). However, most leaders saw that its goals would not win the peace; to some, the treaty was just a resolution for 20 years (Atkinson). France suffered terribly during the war, so at the time of the signing of the Treaty of Versailles, its representatives pledged to make Germany pay for the damage it had suffered. The Treaty of Versailles was very harsh on Germany, and it stipulated that Germany had to take the blame for initiating the war, lose all of its colonies, lose major territories in Europe, and pay a 6.6 billion reparation fee (Shephard). The war guilt clause of the Treaty of Versailles was therefore fully applied. It held that Germany, having started the war as described in Clause 231, was responsible for paying for the damages that resulted from the war (Holocaust). Therefore, Germany had to pay reparations, which mostly went to Belgium and France to pay for the infrastructure and buildings that were destroyed during the war. The payment was in the form of cash. The Germans hated the peace treaty, and their politicians tried to change its conditions and terms in the 1920s. The Nazis and Hitler gained support by vowing to overturn the Treaty of Versailles. However, the treaty satisfied the Big Three, since they believed in a peace agreement that would keep Germany weak so as to stop communism, create unions that would end any form of war, and keep the border of France safe from attack by the Germans. This only left anger all over Germany, as Germans felt that they had been treated unfairly. Germany hated the fact that it was being blamed for instigating the war and resented the resulting penalty the treaty imposed on her.
The effort of the Allied powers to marginalize Germany via the Treaty of Versailles isolated the German people and undermined their democratic rights. The harsh provisions of the treaty convinced many people that Germany had been stabbed in the back, and many Germans felt it was not right for their country to bear full responsibility for reparations as the party said to have initiated the war. It was the anger of the German people that made them unite under the leadership of Hitler, leading to the upsurge of Nazi leadership in Germany. Hitler exploited the Treaty of Versailles to gain more support, and the treaty is today blamed for his rise to power. The treaty was the Allies' plan to keep Germany weak; it did this by ruining her economically, geographically, and politically and by restricting her military power, making sure that the nation would struggle for many years to recover. The treaty made many people suffer, as a million jobs were lost and many families fell below the poverty line. The German people were being punished for a war that their government had started and fought. The Allies managed to ensure that reviving the German economy would be a major problem that might take many years, thereby removing any threat that Germany could pose within their borders. The citizens of Germany felt that they were paying for their government's mistake, as it was the government that declared war in 1914, not the people. Therefore, this paper seeks to describe the Treaty of Versailles and the social and economic impacts it had on Germany.
Economic Impact: Reparations
The Treaty of Versailles made Germany responsible for the damage caused during World War I. Reparations did severe damage to the German economy. The Treaty of Versailles provided that Germany must pay for all the damage that occurred during the war. Germany alone was held accountable for what was destroyed during the war, and the treaty provided that the Allies were to be paid compensation. A sum estimated at around 6,600 million was to be paid in instalments until 1984 (Zapotoczny). This figure was agreed upon by the Allies in 1921 (Malckom). The might of the German economy had already been destroyed during the war, and the reparations stretched it further to the limit; it had no option but to reconstruct its economy while paying reparations. These reparations destroyed the German economy and resulted in the hyperinflation that took place in 1923 (Shephard). Such reparations so crippled the defeated nation's economy that the 1924 Dawes Plan recognized the need for Germany to collaborate with the Allies to revitalize its economy.
The Treaty of Versailles put many restrictions on Germany, leading to financial ruin and pushing Germany into a state of hyperinflation. For example, the treaty provided that Germany was not allowed more than 100,000 troops and only a limited number of ships, and it imposed other restrictions, especially on the military (History). The reparations placed on Germany led to skyrocketing inflation, making it impossible for even an average person in Germany to afford to live. Germany lost most of her sources of raw materials, since most of her territories and colonies that provided income were ceded to other states, for example France (Ruth). Among the industrial territories lost by Germany were Upper Silesia and the Saar (Keynes and Maynard), losses which contributed immensely to the decline of the German economy. The financial penalties and the reparations imposed on her showed that the Allies only wanted to bankrupt her. After the war, Germany was not able to export or import goods (Holocaust). This affected trade tremendously, as natural resources and food were used to pay reparations. Therefore, by 1919 Germany was among the least advanced economies in the world. These are the reasons why the German economy found it so difficult to cope.
The Versailles treaty also affected the import and export of ships and weapons, which was a tremendous problem for the German economy, as the economy had previously been built largely on the production and export of arms (Keynes and Maynard). The terms forced Germany to surrender almost 90% of its railroad cars and merchant fleet, which meant that Germany could not focus on trade (Malckom). Industrial production was restricted, commercial contracts between trading countries and Germany were forbidden, and the Allied markets were inaccessible. The Allies thereby crippled the markets in Germany because of their own preferred status. This destroyed the economy, and the country had to rely on its local markets for trade and the few exports it was allowed to continue. Germany produced almost 258,800,000 to...
The closest place in the universe where extraterrestrial life might exist is Mars, and human beings are poised to attempt to colonize this planetary neighbor within the next decade. Before that happens, we need to recognize that a very real possibility exists that the first human steps on the Martian surface will lead to a collision between terrestrial life and biota native to Mars.
If the red planet is sterile, a human presence there would create no moral or ethical dilemmas on this front. But if life does exist on Mars, human explorers could easily lead to the extinction of Martian life. As an astronomer who explores these questions in my book "Life on Mars: What to Know Before We Go," I contend that we Earthlings need to understand this scenario and debate the possible outcomes of colonizing our neighboring planet in advance. Maybe missions that would carry humans to Mars need a timeout.
Where life could be
Life, scientists suggest, has some basic requirements. It could exist anywhere in the universe that has liquid water, a source of heat and energy, and copious amounts of a few essential elements, such as carbon, hydrogen, oxygen, nitrogen and potassium.
Mars qualifies, as do at least two other places in our solar system. Both Europa, one of Jupiter's large moons, and Enceladus, one of Saturn's large moons, appear to possess these prerequisites for hosting native biology.
I suggest that how scientists planned the exploratory missions to these two moons provides valuable background when considering how to explore Mars without risk of contamination.
Below their thick layers of surface ice, both Europa and Enceladus have global oceans in which 4.5 billion years of churning of the primordial soup may have enabled life to develop and take root. NASA spacecraft have even imaged spectacular geysers ejecting plumes of water out into space from these subsurface oceans.
To find out if either moon has life, planetary scientists are actively developing the Europa Clipper mission for a 2020s launch. They also hope to plan future missions that will target Enceladus.
Taking care to not contaminate
Since the start of the space age, scientists have taken the threat of biological contamination of other worlds seriously. As early as 1959, NASA held meetings to debate the necessity of sterilizing spacecraft that might be sent to other worlds. Since then, all planetary exploration missions have adhered to sterilization standards that balance their scientific goals with limitations of not damaging sensitive equipment, which could potentially lead to mission failures. Today, NASA protocols exist for the protection of all solar system bodies, including Mars.
Since avoiding the biological contamination of Europa and Enceladus is an extremely well-understood, high-priority requirement of all missions to the Jovian and Saturnian environments, their moons remain uncontaminated.
NASA's Galileo mission explored Jupiter and its moons from 1995 until 2003. Given Galileo's orbit, the possibility existed that the spacecraft, once out of rocket propellant and subject to the whims of gravitational tugs from Jupiter and its many moons, could someday crash into and thereby contaminate Europa.
Such a collision might not occur until many millions of years from now. Nevertheless, though the risk was small, it was also real. NASA paid close attention to guidance from the National Academies' Committee on Planetary and Lunar Exploration, which noted serious national and international objections to the possible accidental disposal of the Galileo spacecraft on Europa.
To completely eliminate any such risk, on Sept. 21, 2003, NASA used the last bit of fuel on the spacecraft to send it plunging into Jupiter's atmosphere. At a speed of 30 miles per second, Galileo vaporized within seconds.
Fourteen years later, NASA repeated this protect-the-moon scenario. The Cassini mission orbited and studied Saturn and its moons from 2004 until 2017. On Sept. 15, 2017, when fuel had run low, Cassini's operators at NASA deliberately plunged the spacecraft into Saturn's atmosphere, where it disintegrated.
But what about Mars?
Mars is the target of seven active missions, including two rovers, Opportunity and Curiosity. In addition, on Nov. 26 NASA's InSight mission is scheduled to land on Mars, where it will make measurements of Mars' interior structure. Next, with planned 2020 launches, both ESA's ExoMars rover and NASA's Mars 2020 rover are designed to search for evidence of life on Mars.
The good news is that robotic rovers pose little risk of contamination to Mars, since all spacecraft designed to land on Mars are subject to strict sterilization procedures before launch. This has been the case since NASA imposed "rigorous sterilization procedures" for the Viking Lander Capsules in the 1970s, since they would directly contact the Martian surface. These rovers likely have an extremely low number of microbial stowaways.
Any terrestrial biota that do manage to hitch rides on the outside of those rovers would have a very hard time surviving the half-year journey from Earth to Mars. The vacuum of space combined with exposure to harsh X-rays, ultraviolet light and cosmic rays would almost certainly sterilize the outsides of any spacecraft sent to Mars.
Any bacteria that sneaked rides inside one of the rovers might arrive at Mars alive. But if any escaped, the thin Martian atmosphere would offer virtually no protection from high energy, sterilizing radiation from space. Those bacteria would likely be killed immediately. Because of this harsh environment, life on Mars, if it currently exists, almost certainly must be hiding beneath the planet's surface. Since no rovers have explored caves or dug deep holes, we have not yet had the opportunity to come face-to-drill-bit with any possible Martian microbes.
Given that the exploration of Mars has so far been limited to unmanned vehicles, the planet likely remains free from terrestrial contamination.
But when Earth sends astronauts to Mars, they'll travel with life support and energy supply systems, habitats, 3D printers, food and tools. None of these materials can be sterilized in the same ways systems associated with robotic spacecraft can. Human colonists will produce waste, try to grow food and use machines to extract water from the ground and atmosphere. Simply by living on Mars, human colonists will contaminate Mars.
Can't turn back the clock after contamination
Space researchers have developed a careful approach to robotic exploration of Mars and a hands-off attitude toward Europa and Enceladus. Why, then, are we collectively willing to overlook the risk to Martian life of human exploration and colonization of the red planet?
Contaminating Mars isn't an unforeseen consequence. A quarter century ago, a National Research Council report entitled "Biological Contamination of Mars: Issues and Recommendations" asserted that missions carrying humans to Mars will inevitably contaminate the planet.
I believe it's critical that every attempt be made to obtain evidence of any past or present life on Mars well in advance of future missions to Mars that include humans. What we discover could influence our collective decision whether to send colonists there at all.
Even if we ignore or don't care about the risks a human presence would pose to Martian life, the issue of bringing Martian life back to Earth has serious societal, legal and international implications that deserve discussion before it's too late. What risks might Martian life pose to our environment or our health? And does any one country or group have the right to risk back contamination if those Martian lifeforms could attack the DNA molecule and thereby put all of life on Earth at risk?
But players both public – NASA, United Arab Emirates' Mars 2117 project – and private – SpaceX, Mars One, Blue Origin – already plan to transport colonists to build cities on Mars. And these missions will contaminate Mars.
Some scientists believe they have already uncovered strong evidence for life on Mars, both past and present. If life already exists on Mars, then Mars, for now at least, belongs to the Martians. Mars is their planet, and Martian life would be threatened by a human presence there.
Does humanity have an inalienable right to colonize Mars simply because we will soon be able to do so? We have the technology to use robots to determine whether Mars is inhabited. Do ethics demand that we use those tools to answer definitively whether Mars is inhabited or sterile before we put human footprints on the Martian surface?
David Weintraub, Professor of Astronomy, Vanderbilt University
This article is republished from The Conversation under a Creative Commons license. Read the original article. Follow all of the Expert Voices issues and debates — and become part of the discussion — on Facebook, Twitter and Google +. The views expressed are those of the author and do not necessarily reflect the views of the publisher.
|
Slope Practice Worksheets With Answers. Since a vertical line goes straight up and down, its slope is undefined. This practice resource is ideal for 7th grade and 8th grade students.
Point-slope form student practice worksheet answers are included. We have also provided several different tips for using these worksheets, which are in HTML format.
On These Printable Worksheets, Students Are Given Ordered Pairs Or A Graph And Are Instructed To Find The Slope.
Some of the worksheets for this concept include point-slope form practice work, Infinite Algebra 1 practice, point-slope form answer keys, model practice and challenge problems, and chapter review sheets. A free worksheet (PDF) with an answer key on the point-slope form equation of a line is also included. Convert the given equations into slope-intercept form, y = mx + b, and write them down.
Slope Of A Line Worksheet Pdf And Answer Key.
This practice resource is ideal for 7th grade and 8th grade students. All worksheets were created with Infinite Algebra 1.
Slope Worksheet And Activity I.
Slope practice worksheets with answers, and a slope-of-a-line worksheet (PDF) with answer key, are provided. The formula used is slope = (y₂ − y₁) / (x₂ − x₁); here is how it works.
Topics covered include writing and graphing linear equations in all forms given the slope and a point, graphing linear equations, writing linear equations, and graphing quadratics. Convert to slope-intercept form. Six activities for teaching slope are included.
4.4 Infinite or Undefined Slope.
Practice problems: students practice calculating slope (answers to the problems are available online). Write equations in point-slope form given two pairs of values, and convert each equation into slope-intercept form. Knowledge of the relevant formulae is a must for students of grade 6 through high school to solve some of these PDF worksheets.
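For readers who want to check worksheet answers programmatically, here is a minimal, hypothetical Python sketch (not part of any worksheet) that applies the slope formula above, reports a vertical line's slope as undefined, and rewrites the line in slope-intercept form:

```python
def slope(p1, p2):
    """Return the slope between two points, or None for a vertical line (undefined slope)."""
    (x1, y1), (x2, y2) = p1, p2
    if x2 == x1:
        return None  # vertical line: slope is undefined
    return (y2 - y1) / (x2 - x1)


def slope_intercept_form(p1, p2):
    """Return the line through p1 and p2 as a 'y = mx + b' string (or 'x = c' if vertical)."""
    m = slope(p1, p2)
    if m is None:
        return f"x = {p1[0]}"  # vertical line has no slope-intercept form
    b = p1[1] - m * p1[0]      # solve y = mx + b for b using one known point
    return f"y = {m}x + {b}"


if __name__ == "__main__":
    print(slope((1, 2), (3, 6)))                  # 2.0
    print(slope((4, 0), (4, 7)))                  # None (undefined)
    print(slope_intercept_form((1, 2), (3, 6)))   # y = 2.0x + 0.0
```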
|
Free equation worksheets
With this worksheet generator, you can make customizable worksheets for linear equations (first-degree equations). These worksheets are especially meant for pre-algebra and algebra 1 courses (grades 6-9).
You can choose from SEVEN basic types of equations, ranging from simple to complex, explained below (such as one-step equations, variable on both sides, or having to use the distributive property).
All of the worksheets come with an answer key; however, you need to click the link to the answer key immediately after generating the worksheet, because the answer key is only generated when you click on that link. Because of this, you cannot find the answer key for a specific worksheet later on, should you come looking for it.
Please use the quick links below to generate some common types of equation worksheets.
One-step equations, whole numbers, with no negative numbers involved
One-step equations, whole numbers, the root may be a negative number
One-step equations; involves negative integers
Variable on both sides and includes parenthesis
Challenges; includes rational expressions, such as (x - 5)/6, within the equations
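To illustrate the kind of output such a generator produces, here is a minimal, hypothetical Python sketch that builds one-step equations with whole numbers together with an answer key; the actual generator's options, wording, and formatting will differ:

```python
import random


def one_step_equation(max_value=20, allow_negative_root=False):
    """Generate one one-step equation of the form 'x + a = b' and its solution."""
    a = random.randint(1, max_value)
    low = -max_value if allow_negative_root else 0
    x = random.randint(low, max_value)
    b = x + a
    return f"x + {a} = {b}", x


def worksheet(n=10):
    """Return n numbered problems and the matching answer key."""
    problems, answers = [], []
    for i in range(1, n + 1):
        equation, x = one_step_equation()
        problems.append(f"{i}) {equation}")
        answers.append(f"{i}) x = {x}")
    return problems, answers


if __name__ == "__main__":
    problems, key = worksheet(5)
    print("\n".join(problems))
    print("\nAnswer key:")
    print("\n".join(key))
```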
Equation Worksheet Generator
Key to Algebra offers a unique, proven way to introduce algebra to your students. New concepts are explained in simple language, and examples are easy to follow. Word problems relate algebra to familiar situations, helping students to understand abstract concepts. Students develop understanding by solving equations and inequalities intuitively before formal solutions are introduced. Students begin their study of algebra in Books 1-4 using only integers. Books 5-7 introduce rational numbers and expressions. Books 8-10 extend coverage to the real number system.
|
A dot plot encodes data as dots or small circles placed along a number line. It displays the distribution of a numerical variable, with each dot representing one value.
1. What is Dot Plot?
2. Types of Dot Plot
3. How to Make a Dot Plot?
4. FAQs on Dot Plot
What is Dot Plot?
A dot plot is used to represent any data in the form of dots or small circles. It is similar to a simplified histogram or a bar graph as the height of the bar formed with dots represents the numerical value of each variable. Dot plots are used to represent small amounts of data. For example, a dot plot can be used to collect the vaccination report of newborns in an area, which is represented in the following table.
| Colony | A | B | C | D |
| --- | --- | --- | --- | --- |
| Number of babies vaccinated | 7 | 3 | 5 | 1 |
Now let's see the number of newborn babies who got a vaccine in each colony. Colony A has a total of 7 dots, which means that seven babies have been vaccinated. Similarly, colony B has three babies, colony C has five babies, and colony D has one baby who has been vaccinated. The other way to represent it through a dot plot is given below:
Types of Dot Plot
There are two types of dot plot: Wilkinson dot plot and Cleveland dot plot.
Wilkinson Dot Plot
The Wilkinson dot plot represents the distribution of continuous data in the form of individual dots for each value. For example, if 10 students like math it is represented by 10 dots on a dot plot. In the above example of the number of kids vaccinated, the first graph showing 7 dots for colony A, 3 dots for colony B, etc is an example of a Wilkinson dot plot.
Cleveland Dot Plot
The Cleveland dot plot is a good alternative to a simple bar chart when you have more than a few items, since a bar chart quickly starts to look cluttered. Many more values can be shown in the same amount of space with a dot plot, and it is also simpler to read. This type of plot is similar to a bar chart, but it uses the position of a single dot instead of the length of a bar built from many dots. Just as the height of a bar represents the number of items, the position of the dot on the number line or graph represents the number of items for that category. In the above example of vaccinated children in 4 colonies, the second graph showing only one dot for each colony is an example of a Cleveland dot plot.
How to Make a Dot Plot?
There are many ways to draw a dot plot, but the best way to learn is to draw one by hand. Let's understand this with the help of an example. The data given below show the number of books read by a group of kids during the last summer holidays.
| No. of books read | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| No. of kids | 4 | 6 | 3 | 1 | 2 | 1 | 0 | 1 | 1 | 1 |
Follow the steps given below to make a dot plot:
Step 1: Select a scale and set it up.
Step 2: Plot the dots.
In this step, you will begin to fill in the dots using the scale from the previous step. Keep in mind that each value gets a dot and the dots are stacked. To see this, start by plotting only the first row of data: each value is marked by a dot on the plot, and we have 4 dots stacked on top of each other because 4 kids read 0 books. Now we can continue the process with the rest of the data. While you can't do this perfectly by hand, try to make sure the dots mostly line up; you do not want wide gaps between dots to make one value seem more prevalent than another.
That's really it! The dot plot is a great choice precisely because it is so easy to create and to read. Remember that in statistics our purpose is also to convey information to others; the faster we can do that, the better.
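If you prefer to draw the plot with software instead of by hand, here is a minimal, hypothetical Python/matplotlib sketch that stacks one dot per kid for the books-read data above (the styling choices are illustrative only):

```python
import matplotlib.pyplot as plt

# Books read last summer (value -> number of kids), from the table above.
counts = {0: 4, 1: 6, 2: 3, 3: 1, 4: 2, 5: 1, 6: 0, 7: 1, 8: 1, 9: 1}

xs, ys = [], []
for value, count in counts.items():
    for stack_height in range(1, count + 1):   # stack one dot per kid
        xs.append(value)
        ys.append(stack_height)

plt.scatter(xs, ys, s=80)
plt.xticks(range(0, 10))
plt.yticks(range(0, 7))
plt.xlabel("Number of books read")
plt.ylabel("Number of kids")
plt.title("Dot plot: books read last summer")
plt.show()
```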
Given below are some of the important notes related to the dot plot. Have a look!
- The dot plot shows the distribution of a numerical variable, with each value represented by a dot on a number line or a graph.
- Like a histogram, the Wilkinson dot plot shows the distribution of continuous data, but it displays individual data points rather than bins.
- The Cleveland dot plot shows graphical data elements and displays a continuous variable versus a categorical variable.
Dot Plot Examples
Example 1: The following dot plot illustrates each student's essay score in Mr. Jhonson's class. A different student is represented by each dot. What was the minimum essay score earned by a student and what is the score earned by the maximum number of students?
Solution: The dot plot above shows the number of students who received each essay score on a 6-point scale.
- The minimum essay score that a student received is 2 points.
- Four students earned 3 marks, which is the score earned by the maximum number of students.
Thus, the minimum essay score that a student received is 2 points, and 3 is the score earned by the maximum number of students.
Example 2: The following dot plot shows the height of each toddler at Mrs. Bell's daycare. Each dot represents a different toddler. What is the height of the shortest toddler?
Solution: The range on the axis is from 80 to 86. No toddler has a height of 80 or 81 units. There are two dots at the 82 bin, which means there are 2 toddlers whose height is 82 units. Therefore, the height of the shortest toddler is 82 units.
Example 3: The number of hours that students did on homework in one week was recorded in the frequency table below. Draw a dot plot for the information given.
| Day of the week | Number of hours of homework |
| --- | --- |
| 1 (Monday) | 4 |
| 2 (Tuesday) | 5 |
| 3 (Wednesday) | 8 |
| 4 (Thursday) | 8 |
| 5 (Friday) | 5 |
| 6 (Saturday) | 4 |
| 7 (Sunday) | 3 |
Solution: Dot plot is represented below.
FAQs on Dot Plot
Where is a Dot Plot Used?
The dot plot is one type of graphical representation of data on a number line or a graph. It is commonly used when the data set is very small. It can be used to convey important information to the viewer, and it is often used in schools to display data. Dot plots are easy to draw and easy to read, so they can be used in most places to display information. They are useful for highlighting clusters and gaps.
What is the Difference Between a Line Plot and a Dot Plot?
A line plot uses lines to indicate the numerical value of each category in a data set, whereas a dot plot uses dots to show how many times each number occurs on a number line.
How do you Find the Mode on a Dot Plot?
In the case of a dot plot, the bin or container having the maximum number of dots is the mode of the given set.
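As a small, hypothetical Python illustration, the mode can be read off the same counts that a dot plot stacks: the value with the tallest stack of dots (the data list below is the books-read example from earlier):

```python
from collections import Counter

data = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 2, 2, 2, 3, 4, 4, 5, 7, 8, 9]  # books read
counts = Counter(data)               # value -> height of its dot stack
mode = max(counts, key=counts.get)   # the tallest stack is the mode
print(mode)                          # 1
```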
Why would you use a Dot Plot instead of a Histogram?
In a dot plot we can show each individual data observation, whereas in a histogram the data are grouped into classes and then plotted.
How do you Describe a Dot Plot?
A dot plot is a visual representation on a number line that uses dots to show how many times each value occurs in the data. Dot plots reveal peaks, clusters, and gaps in a data set.
What are Elements in Dot Plots?
The elements of dot plots for small data sets are:
- Graph filled with dots
- A scale to compare the frequency within categories
When would you use a Dot Plot to Represent Data?
Dot plots are used for continuous quantitative data. Data points are marked with dots on the number line. The dot plot is one of the easiest statistical plots and is suited to small and medium-sized data sets.
|
In August 1872, a 34-year-old John Muir climbed the snow and ice of Mount Lyell and Mount Maclure into the highest reaches of what is today Yosemite National Park. The journey to the high country was no pleasure trip, but an expedition intended to resolve a bitter scientific dispute. The climb, chronicled in “The Living Glaciers of California,” published in the November 1875 issue of Harper’s Magazine, would hold great geological significance as Muir gathered evidence for the formation of the Sierra Nevada’s distinctive granite valleys.
At the time, no one had collected any evidence to suggest that the permanent ice and snowfields in the Sierra’s high basins were “living” glaciers. Muir believed they were. He posited that in a distant, colder past, these small glaciers once ran like great rivers of ice, carving the granite canyons of the western Sierra, including the majestic defile of Yosemite Valley itself.
Muir’s most outspoken intellectual opponent was Josiah Whitney, an eminent geologist who derided the Scotsman as an “ignoramus” for straying into a field in which he possessed no formal training. Whitney had a competing hypothesis. He believed Yosemite Valley had been created during a cataclysmic earthquake in which the massive granite uplift had been shaken violently – like a rising cake jostled in the oven – forming a deep furrow through its midsection.
Muir was undeterred. On August 21, 1872, he ascended the snowfield on the northern shoulder of Mount Maclure, joined by Galen Clark (the first appointed “guardian” of Yosemite) and University of California professor Joseph LeConte. The group hauled bulky stakes hewn from whitebark pine and drove them five feet into the ice. The experiment was simple but elegant. Using a plumb bob rigged from a strand of horsehair and a stone, Muir surveyed the stakes, making sure they were in a straight line. Muir would return to the spot the first week of October. If the stakes had moved, he would have evidence that the patch of ice was not merely a permanent snowfield but a “living” glacier, pulled downhill by gravity and, in the process, gouging out the mountain below.
When Muir returned to Mount Maclure on October 6, 1872, he found that all his stakes had moved. One marker had traveled less than a foot, but several others slid nearly four feet. By his reckoning, the stakes that showed the greatest displacement had been moving downhill at a clip of one inch per twenty-four hours.
Muir’s scientific feud would drag on for several years after his discovery on the Maclure. Nearly a century and a half later, the debate mostly has been settled. Geologists agree that the granite of Yosemite’s famed cliffs was pushed up from great chambers of magma beneath Earth’s surface and, over 50 million years, Yosemite Valley was simultaneously uplifted and cut by the Merced River. Then, around two to three million years ago, Earth’s climate rapidly cooled. The small glaciers of the Sierra became massive ploughs of ice, 2,000 feet thick. Over the next 750,000 years the glaciers would become the great shaping forces of Yosemite.
Today, new forces are shaping the Sierra, California, and the entire planet. In the 140-plus years since Muir came down from the mountain, the Golden State has grown from 560,000 to 37 million people, and 4 million visitors arrive at Yosemite Valley annually, the vast majority of them in automobiles spewing CO2. Global carbon dioxide concentrations have jumped from 280 parts per million in 1850 to 400 parts per million today – warming the planet and, in the process, thawing the mountain snowfields.
These mid-latitude mountain glaciers, even more than their Arctic cousins, are powerful indicators of global climate change. They also signal serious regional consequences, namely decline in the snowpack, the lifeblood of the state’s two heavily engineered river systems: the Sacramento and San Joaquin. Now that lifeblood is draining away as the Sierra’s ancient ice formations melt.
The task of measuring Yosemite’s fading glaciers has fallen to geologist Greg Stock. Unlike the bearded, self-taught geologist Muir, the rangy, youthful 40-year-old goes clean-shaven and holds a PhD in earth sciences from the University of California-Santa Cruz. Stock grew up in the town of Murphys, on the national park’s southern fringe, and spent much of his time exploring the region’s numerous caves. These days his work has pulled him upward, to the ailing ice formations atop the Range of Light. Stock recounts a recent hiking trip taken with his daughter through the Yosemite backcountry, a reminder that his research holds generational significance. “It’s hard to believe, but the glaciers may be gone within her lifetime,” he says.
Stock’s glacier research began with an accidental discovery made nearly 30 years earlier by Pete Devine, one of Yosemite’s leading naturalists, during a mid-80s ascent of Mt. Lyell. As he made his way over a rugged ridgeline, he came upon a conspicuous letter “K” and a circle inscribed on the rock in orange paint. Devine suspected the marks were related to glacier research – a hunch confirmed after a visit to the park’s research library. The “K” and a corresponding “L” on the other side of the cirque were survey points established in the 1930s right at the edge of the ice.
Devine also stumbled upon a trove of lost data: glacier surveys conducted by park naturalists between 1931 and 1975. The surveys, Devine says, succinctly describe the glacial retreat, though without the context of climate change. “There’s not that sense of alarm we hear today,” Devine says. “The tone of the reports is much more, ‘We saw this. We measured this.’”
Shortly after Stock was hired in 2006, Devine showed him the reports and the Yosemite glacier surveys were reborn. Last July and September, I had a chance to participate in two of these surveys, joining Stock, his colleague Bob Anderson, a glacier researcher from the University of Colorado, and several volunteers on two trips to the Lyell and Maclure Glaciers, among the last in a four-year study begun in 2009.
One of Stock’s main objectives was to diagnose the condition of the Lyell, Yosemite’s largest and most iconic glacier, which has lost so much ice that Stock suspects it has stopped moving altogether – which is to say, it may no longer be a glacier.
Another of Stock’s investigations would replicate John Muir’s 1872 experiment on the Maclure Glacier to determine whether it is still a “living” glacier, as Muir proclaimed 140 years earlier. Or has relentless melting taken its toll, reducing it to a stagnant patch of ice – a “dead” glacier?
We would find out.
We set out on a glorious day in late September through Tuolumne Meadows and toward the high peaks above. Before beginning, we divvy up a high-tech array of gear – three-foot lengths of PVC pipe, temperature sensors, tripods, titanium ice augers, collapsible shovels, GPS receivers, laser surveying equipment – a collection that little resembles Muir’s makeshift equipment.
Within minutes, we cross a bridge over Rafferty Creek, barely a trickle in its rocky bed. Stock says this illustrates a key hydrological characteristic of the Sierra Nevada. Though Rafferty Creek originates in an alpine basin, it lacks a glacier or permanent snowfield at its source and goes dry in late summer. “The glaciers act like buffers through the dry months,” Stock says, pointing out that glacier-fed waterways provide an invaluable resource to plants and animals, not to mention parched backpackers.
Is the Maclure still a “living” glacier, as John Muir proclaimed? We would find out.
We press on through the constricting canyon as the temperature climbs. Unlike dry Rafferty Creek, the Tuolumne River runs clear and cold, filled with meltwater from the glaciers above. About 35 miles downstream lies Hetch Hetchy Reservoir, the massive water-engineering project that Muir tried unsuccessfully to halt in the early 1900s. The Tuolumne was dammed and the great granite canyon – which Muir proclaimed the scenic equal of Yosemite Valley – was inundated to supply drinking water to San Francisco. Today Hetch Hetchy supplies water to 2.4 million people across the Bay Area.
A few miles farther on, we come to a bend in the trail marked by a cairn. We plod into the frigid, thigh-deep Tuolumne, balancing atop slick slabs of submerged granite to another cairn concealed in ankle-high grasses. At the head of the valley, 13,114-foot Mount Lyell, Yosemite’s highest point, thrusts upward, its namesake glacier slathered like frosting over the mountain’s swooping ramparts. (The mountain and its glacier are named for famed nineteenth-century geologist Charles Lyell, friend and colleague of Charles Darwin, who, ironically, was resistant to the idea of “ice ages” when it was first introduced in the 1830s.)
Even from a distance, it’s easy to see how much the Lyell has changed since geologist Israel C. Russell photographed it in 1883 from this very spot. Once an unbroken curtain of ice, the glacier has split into two separate lobes. Stock relays the findings of Hassan Basagic, a glacier researcher from Portland State University who found that the west lobe of the Lyell has lost about 30 percent of its area, the east lobe, 70 percent. “But it’s not the area that matters as much as the volume,” says Stock. “From our models, the volume loss is more like 80 percent.”
Contrary to common thinking, the glaciers of the Sierra Nevada are not holdovers from the Pleistocene, or Ice Age, an epoch that began 2.6 million years ago and ended 11,000 years ago as Earth entered its current period of warming. Today’s glaciers are remnants of the so-called Little Ice Age, an intervening period of cooling lasting from roughly 1350 to 1850. Many possible causes for the Little Ice Age have been offered, including altered ocean circulation patterns, weakening solar radiation, and volcanic eruptions that released vast plumes of sunlight-blocking ash and sulfate high into the atmosphere.
During Muir’s visit to the Lyell and Maclure in the 1870s, the ice would have been near its maximum extent from the Little Ice Age glaciation. Since then the melting has been rapid and unrelenting. According to Andrew Fountain, a geology professor at Portland State, the surface area covered by the Sierra’s roughly 1,700 glaciers and permanent snowfields has dwindled by about 55 percent and now covers a mere 46 square miles. Glaciers throughout the American West and worldwide have experienced a similar ebb. In the Colorado Rockies, glaciers and snowfields have lost 42 percent of their area; the Cascades, 48 percent; and Glacier National Park, 66 percent. According to the National Oceanic and Atmospheric Administration, alpine glaciers around the globe have been in a state of “negative mass balance” – a condition in which melting exceeds snow accumulation – for 21 years. NOAA also reports that the world’s alpine glaciers have lost, on average, 50 feet of thickness since 1980.
The date of the start of this mass retreat is conspicuous – 1850 or thereabouts, the moment when humanity began generating huge quantities of carbon dioxide by burning coal to power the steam engines of the Industrial Revolution. Muir, too, saw the fingerprint of warming and understood it to be part of a larger global trend. “Every glacier in the world is smaller than it once was,” Muir wrote. “All the world is growing warmer, or the crop of snow flowers is diminishing.”
We rise at dawn from our camp at a beautiful timberline lake and begin our climb, ascending to a high granite spine that will carry us to the glacial cirques of Mount Lyell and Maclure. Stock is tall and lean and walks with economy but a sense of urgency. “Just over this rise and we’ll get a view,” he says, gripping the straps of his daypack from which several lengths of white PVC pipe jut like arrows from a quiver.
We press upward over granite slabs cleaved into massive rectangular flakes. Other sections are polished smooth or covered in “chatter marks,” deep scars left by boulders raked across the surface by the glacier that once covered this slope. Over a small rise, we encounter a strange, spaceship-like array of solar panels, temperature probes, and wind speed gauges affectionately known by the team as the “met station.”
During the last four years, this array of instruments has collected climate data on the 11,500-foot-high ridge. While snowfall is highly variable year-to-year, average precipitation in the Sierra has remained virtually unchanged since record-keeping began more than 125 years ago. The temperature records, however, tell a different story. During the last century, California has experienced a one-degree Celsius rise in average temperature. Even the high country has not been immune to this uptick. In summers past, nighttime temperatures would often drop below freezing. These days, the lows rarely drop that far, and between June and September the glaciers are in a state of near-constant melting. As the amount of snow cover decreases, darker bedrock is exposed, absorbing sunlight and reradiating additional heat into the alpine bowl.
Meltwater from the Lyell and Maclure feeds directly into the Lyell Fork of the Tuolumne River, the main artery to Hetch Hetchy Reservoir. Stock estimates the two glaciers hold about 20 acre-feet, a mere teardrop in the 360,000-acre-foot bathtub of Hetch Hetchy Reservoir.
Although the glaciers are small from a water security perspective, the Lyell and Maclure glaciers play a huge role in the local ecology. According to Alexander Milner, a professor of river ecosystems at the UK’s University of Birmingham, the disappearance of a glacier can lead to a drastic drop in the biodiversity of the streams fed by those glaciers. A 2012 report he coauthored in Nature Climate Change found that the disappearance of glaciers could lead to an 11 to 38 percent drop in the number of species of macro-invertebrates, mainly insect larvae. “You tend to gain generalists, species that can survive in many conditions, and lose organisms adapted specifically to the cold temperatures,” Milner says. Of course, the loss of the “little” things in the food web often results in a concomitant crash of whatever feeds on them, including fish, amphibians, and birds.
The glacial retreat is merely the most visible evidence of a larger and more troubling phenomenon for California’s human inhabitants: the state’s dwindling snowpack. Researchers speculate that warming temperatures could affect snowlines statewide, jeopardizing its water supply. According to Yosemite’s hydrologist, Jim Roche, as much as three-quarters of California’s drinking water comes from snowmelt.
In Yosemite, a shift in the snowline could reduce the volume and alter the timing of the spring melt that fills Hetch Hetchy Reservoir. According to a 2008 paper from Bruce McGurk, a former hydrologist for Hetch Hetchy Water and Power, the snowline in the Hetch Hetchy watershed is expected to rise from 6,000 feet now to 8,000 feet by the end of the century. Other lower elevation watersheds may be even more vulnerable, including that of Feather River, the largest tributary of California’s largest river, the Sacramento.
Snowpack is projected to decline 25 percent statewide by 2050, according to the Department of Water Resources. Shortfalls of snowpack mean shortfalls in water deliveries – the consequences of which have been seen in the last few years, including water rationing in Los Angeles and the Bay Area, and the fallowing of massive swaths of farmland in the Central Valley. Of particular concern is the diminution of freshwater flowing to the Sacramento-San Joaquin Delta, the vast and beleaguered tidal lowlands connecting the Sierra to the San Francisco Bay. The Delta is also the origin of the California Aqueduct, which delivers irrigation water to the huge farms of the Central Valley and drinking water to 25 million Californians. According to a Scripps Institution of Oceanography report, reduced snowpack could greatly reduce freshwater inflows, leading to increased salinity of the Delta, further threatening ecosystems, farms, and municipal water supplies.
After leaving the weather station, we pause at a small lake filled with aquamarine water. As we fill our bottles, Stock points out the fine dust coating our boots – glacial flour, sign of an active glacier. The final push to the edge of the Lyell Glacier is a grueling climb over hillock after hillock of loose rock. The moraines once marked the edge of the glacier and are comprised of slabs the size of dinner tables, some of which shift unnervingly underfoot.
Once we arrive at the Lyell’s edge, Stock readies the laser rangefinder atop a large slab. Bob Anderson straps on his crampons and summons several volunteers to continue upward onto the ice. As we climb onto the glacier’s ashen skin the sound of flowing water becomes stereophonic. Small runoff channels feed larger ones and the icy water carried within eventually pours into vibrant blue chasms, disappearing with a guttural boom into the deep recesses of the glacier.
Our job is to find the stakes Stock affixed across the mountain during the last four seasons, including several we planted six feet deep in July. With the ferocious melting, the few stakes we find upright are barely anchored to the ice. Others are lost completely.
We set about reattaching the stakes and holding over them a prism, which Stock zeroes in on with the laser rangefinder. It doesn’t take lasers, however, to recognize the Lyell’s regression. There are no crevasses, deep fissures running through its surface, which are telltale signs of glacial action. Stock calls our attention to a barely discernible, bright orange letter “K” spray-painted on a boulder high on the east flank of the mountain – the survey point Pete Devine discovered nearly 30 years ago.
When the mark was established over 80 years ago, one could step right from K and onto the ice. During a subsequent survey in 1949, surveyors noted the surface of the ice had plummeted 53 feet below point K. Stock’s measurements revealed the distance between K and the ice had grown to more than 120 feet. “I’m always reminded that these are not the same glaciers that Muir visited,” Stock tells me. If Muir could somehow be teleported onto the landscape today, he’d be walking on a surface of ice more than 100 feet above our heads.
Stock takes the last of his measurements, aiming his yellow rangefinder at the small specks of the team moving like dust motes against a vast canvas of white. After recording his final readings, Stock issues a shrill “whoop” and the team descends from the ice.
After removing his crampons, Anderson convenes with Stock to discuss the data, which subtly begins to convey the severity of the Lyell’s dissolution. My ears perk up when I hear Anderson utter the phrase, “death knell.”
“Four-hundred point two this versus four-hundred point three last month,” Stock says, flipping through his notebook. “One stake has shown a meter and a half movement per year.”
“And that’s, as you said, the anomaly,” Anderson says. “Everything else is not moving.”
“Everything else is zero,” Stock confirms.
By now the survey team has gathered to hear the grim news.
“Wow, folks,” says Anderson. “This thing is just a-melting away.”
At a presentation Stock and Pete Devine will give the following March for the Yosemite Conservancy, Stock reiterates his findings on the mountain. He projects a chart showing the movement of each of the stakes on the Lyell: Its columns are stacked with zeroes. “It’s not my place to rename features in the park,” Stock said to the small crowd assembled in Yosemite Visitor Center. “But Lyell Glacier is probably not the best name for the feature we see today.”
The morning after our somber findings on the Lyell, we rise again and climb the steep spine of rock toward the met station. Instead of moving into the Lyell’s sun-drenched basin, we veer west, traversing the mountain’s flank, skirting Maclure Lake, a deep, arrowhead-shaped pool of azure water wedged between vertical walls.
The Maclure Glacier covers a much smaller area and has lost about 65 percent of its surface area – the same as the Lyell. And for this reason, I assume the prognosis will be just as dire. And yet the “feel” of the Maclure Glacier and its environs is vastly different from that of the Lyell. Its basin is steeper and narrower, casting long shadows across the ice, which is cut through with dozens of deep crevasses.
There are many signs that the Maclure may still be a living glacier – but only the measurements will tell for sure.
The survey team ascends, repeating the procedures carried out on the Lyell. The work is more arduous on the Maclure’s steeper slopes and care must be taken to avoid the deep fissures that crisscross its surface. As Anderson leads the team to the markers, Stock surveys them and writes the data into his notebook. After a few hours, all the points have been collected.
Stock pores over the measurements and is astounded by what the data conveys. Not only does the Maclure appear to be moving, but it is moving at almost exactly the same rate as John Muir measured in 1872 – one inch every twenty-four hours.
How is this possible? With the extreme loss of ice, simple logic suggests that if the Maclure were moving at all it would be at a fraction of its former rate. And yet, it continues apace. It’s a puzzle, but Anderson and Stock believe the physics of glacial motion may have shifted over 140 years. “You look at this thing and it looks like a solid. But in fact these objects are flowing,” Anderson says. “It can flow in one of two ways – by way of deformation or sliding.”
Deformation, Anderson explains, can be envisioned by thinking about a deck of cards. If you place the deck on the table and then run your hand over the top card, the layers underneath will “flow” in the direction you move your hand, with the uppermost cards moving farthest. Deformation is the mechanism by which the very large arctic glaciers move; the rate of movement is more or less constant and dependent on ice thickness rather than seasonal temperature shifts.
Now that the Maclure has been reduced to a thin sliver of ice, Stock believes sliding may have taken over. (To envision sliding, think of running your hand over a box of cards – the whole set moves across the table together.) The only way sliding is possible, Stock says, is if there is meltwater under the bed, which acts as a kind of lubricant allowing the movement of the entire ice sheet. As melting increases, so does the amount of water funneling through the crevasses to the bedrock below, in turn increasing the glacier’s downhill velocity. In this case, the glacier’s movement might be a symptom of weakness rather than vigor. “Sliding isn’t dependent on the thickness,” says Stock. “You can move even a thin carapace of ice if you’ve just got lubricant underneath it.”
But why has the Maclure continued to move while the Lyell has stopped? Turns out the Maclure’s steep, shaded topography may have played a key role in its “survival” to this point. Perhaps the Maclure and the nearby Dana Glacier – both of which are situated in steep, shaded basins – have been sheltered somewhat from warming temperatures. In contrast, the Lyell sprawls in its wide-open cirque exposed to the full brunt of the sun.
Just how long the Maclure might be able to cheat death, hiding like a fugitive in its dark, remote basin – its glacial “womb” as Muir called it – is unclear. But, for now at least, there is one glacier still living in the High Sierra.
Jeremy Miller’s writing has appeared in Harper’s, Orion, High Country News and Men’s Journal. Photojournalist Tim Palmer is author of the new book California Glaciers (Heyday Press).
|
Part 1: ARGUMENTS
The entire purpose of the logic we're looking at is assessing arguments. So it'll be helpful to understand exactly what an argument is. Here's a working definition:
ARGUMENT: A series of propositions consisting of premises which are purported to support a conclusion.
Of course, to understand this definition, we need to know what a proposition, premise, and conclusion are.
A proposition is a special kind of statement - it's one that can be true or false. Obviously, questions like 'What time is it?' can't be true or false. And neither can commands like 'Shut the door.'
A proposition says something about the world. Here are some examples of propositions:
1) It is sunny outside.
2) Mercury is the closest planet to the sun.
3) All mammals lay eggs.
Notice that 3 is false. But that's okay, it's still a proposition. Remember, these are statements that can be true OR false. As it turns out, there are some philosophers who have some strong arguments about what is and isn't a meaningful proposition. But that discussion is for an Analytic Philosophy class. We don't really care about these things in logic. If it makes sense to say it's true or false, then it's a proposition.
So an argument consists of premises and a conclusion. The premises are propositions that give you a reason to accept the conclusion, which is also a proposition. The conclusion is what you're supposed to, well, conclude! Here's an example:
1) All men are mortal.
2) Socrates is a man.
3) Therefore, Socrates is mortal.
In this argument, 1 and 2 are the premises which support 3, the conclusion. You can usually tell the conclusion by keywords like 'therefore' 'so' and 'thus.'
Look back at the definition of an argument - notice it says that the premises are PURPORTED to support the conclusion. That just means that they are intended to give support - but they may fail miserably. The result would be a bad argument, but it's still an argument. Here's an example:
1) Monkeys like bananas.
2) I like bananas.
3) Therefore, I'm a monkey.
In this argument, the premises lend very little support to the conclusion. You may even have an argument where the premises have nothing at all to do with the conclusion. But these are still arguments - just really bad ones!
So now you know what an argument is. Up next, we'll go over the basics of how to assess an argument. This, remember, is the central goal of logic (at least, the logic we're talking about).
It's worth noting here that the kind of logic we'll be talking about is called PROPOSITIONAL LOGIC. This logic deals with, you guessed it, propositions. Overall, it's very weak - there are many arguments it can't assess.
More powerful logical systems like predicate logic and modal logic can handle more arguments. But you have to walk before you can run, and this kind of logic is a very good place to start. If you can understand this, you'll have a much easier time learning more powerful logical systems.
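To make "assessing an argument" concrete, here is a small, hypothetical Python sketch (not part of the original lesson) that brute-forces a truth table to test whether a propositional argument form is valid, i.e., whether there is any assignment that makes all the premises true and the conclusion false:

```python
from itertools import product


def implies(p, q):
    """Material conditional: 'p implies q' is false only when p is true and q is false."""
    return (not p) or q


def is_valid(premises, conclusion, num_vars):
    """An argument form is valid if no assignment makes every premise true and the conclusion false."""
    for values in product([True, False], repeat=num_vars):
        if all(premise(*values) for premise in premises) and not conclusion(*values):
            return False  # found a counterexample
    return True


# Modus ponens: P, P -> Q, therefore Q  (valid)
print(is_valid([lambda p, q: p, lambda p, q: implies(p, q)],
               lambda p, q: q, 2))          # True

# Affirming the consequent: P -> Q, Q, therefore P  (invalid)
print(is_valid([lambda p, q: implies(p, q), lambda p, q: q],
               lambda p, q: p, 2))          # False
```

The second call shows one classic way premises can fail to support a conclusion, which is exactly the kind of failure the lesson describes.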
These are the basics, so if there are any questions, please post them. It's vital that you understand these definitions so that the next part will make sense.
|
The Vertical-Cavity Surface-Emitting Laser (VCSEL) was first proposed in 1977 by Professor Kenichi Iga from Tokyo Institute of Technology, who two years later also achieved lasing for the first time under pulsed operation at 77 K.1,2 In earlier research by Ivars Melngailis in 1965, lasing parallel to the current injection was achieved for the first time and some of the advantages of this type of laser structure were highlighted such as easy array formation and low divergent output beam due to coherent emission from a large area.3 Room temperature operation in a VCSEL was achieved4,5 in 1989 and in the mid-1990’s VCSELs became commercially available. Today, more than 100 million VCSELs are produced annually and the price for such a laser for computer-mouse applications is approaching $0.10.6
Applications for today's VCSELs include short-distance optical links, computer-mouse applications, laser printers, sensors, infrared illumination and tailored infrared power heating systems. They have become a popular light source because of their advantages over their edge-emitting laser counterpart, such as high modulation speed at low drive currents, a circular-symmetric low-divergent output beam, low threshold currents, ease of fabrication into two-dimensional arrays, and low-cost manufacturing due to on-wafer testing.6 If VCSELs were also available with emission wavelengths in the ultraviolet to visible regime, many more applications could benefit from such a light source. For example, in visible light communication the current state of the art uses micro-light-emitting diode (LED) arrays. The performance of those emitters is limited by the spectral bandwidth and slow frequency response due to long carrier lifetimes associated with the spontaneous emission process.7,8 A laser would offer a light source with a much narrower spectrum and much higher modulation speed. GaN-based edge-emitting lasers have been used to transmit 4 Gbit/s with a bit-error-rate of 2.7·10⁻⁴ in a 0.15 m free-space link9 and more advanced modulation schemes (64-QAM OFDM) have been applied to reach 9 Gbit/s over 5 m in free-space10 with a bit-error-rate of 3.6·10⁻³. A VCSEL could offer additional advantages compared to an edge-emitting laser, as mentioned above.
In solid-state-lighting, white light sources can be achieved by using blue LEDs with phosphorus coating to yield white emission. However, the blue LEDs are suffering from efficiency droop (reduction in power efficiency at higher drive current densities), and to efficiently generate enough lumens, an LED bulb must contain many LEDs where each is operated at a low-current density. This leads to a large part of the wafer being dedicated to just one LED bulb and results in a high cost. If blue-emitting lasers would be used instead of the LEDs, a high-power conversion efficiency could be achieved at much higher current densities, and the much higher optical output power generated by a single device would correspond to a smaller area to achieve the same lumens, resulting potentially in a lower cost for laser-based lighting systems.11-12 A VCSEL offers a more directional, circular-symmetric beam, which could lead to new and more compact luminaires at a low cost. In addition, the individually addressable elements in a two-dimensional VCSEL array would enable tailor-made, dynamic emission patterns for smart lighting systems.
GaAs-based VCSEL arrays are established devices in laser-printers, and higher resolution, high-speed printers would benefit from VCSEL arrays with shorter emission wavelengths.13 Single-mode and polarization stable VCSELs might also be of interest in high-density optical data storage, most likely only for read-out where 4-5 mW is required, and not for writing where the power levels required are about 30-40 mW,14 much higher than what a single-mode VCSEL can deliver. GaN-VCSELs could also be of interest for laser display applications, such as head-up-displays, near-eye displays15 and picoprojectors, which do not require so much optical output power. Different sensor applications, such as Doppler-based interferometers16, could also benefit from a short-wavelength VCSEL.
Interest in short-wavelength (400-500 nm) lasers for bio-sensors and medical diagnosis and treatment is rapidly growing, and new applications are continuously arising. Chemical tracking and biological agent detection is enabled by exciting fluorescent dyes and proteins.17 In dentistry, low power lasers are used to detect caries, using fluorescence from hydroxyapatite or bacterial by-products. Moreover, photochemical reactions based on photoactivated dye techniques can be used to disinfect root canals, periodontal pockets, cavity preparations and sites of peri-implantitis.18 In optogenetics, stem cells can be genetically modified to render them sensitive to blue light. Thereby, events in specific cells of living tissue can be controlled, with applications in artificial ears19 or eyes. Blue VCSELs could enable novel designs and facilitate system integration into the body. A 2D-VCSEL array would allow for selective stimulation of a nerve without mechanical repositioning of the laser, and the individually addressable lasers give the possibility to stimulate multiple sites simultaneously. This two-dimensional probing of neural networks has been demonstrated with GaN micro-LED arrays20, and is desired for example in cochlear implants21. Blue LEDs have recently been used in in vivo optogenetic experiments22, where VCSELs could offer additional advantages such as a low-divergent circular-symmetric output beam and improved high-speed performance. In the field of medical diagnosis, the detection of skin and esophagus cancer is now possible without the use of biopsy by using laser-induced fluorescence at a wavelength of 410 nm.23,24 GaN-based lasers have also recently been used in early diagnosis of oral cancer.25 This non-invasive technique is fast, reliable and reduces both pain and recovery time for the patient. In medical treatment, a technique has recently been developed which uses blue light to activate chemotherapy drugs in specific cells, which could lead to more effective chemotherapy for cancer treatment with limited side effects.26
To summarize, even though there are no commercial GaN-based VCSELs yet, there are many applications which would benefit from such a light source, and most likely more applications will appear once such light source is available. It is also encouraging to see many companies investing time and effort into this field, and as a result Nichia27, Panasonic28 and Sony29 now all have demonstrated electrically injected GaN-VCSELs.
The first electrically injected GaN-based VCSEL was demonstrated by the National Chiao Tung University (NCTU) in Taiwan in April 2008.30 It had one bottom epitaxial AlGaN/GaN DBR, one top dielectric DBR, and a 240 nm indium-tin-oxide (ITO) layer as a current spreading layer. It operated under continuous wave (CW) conditions at 77 K at a wavelength of 462.8 nm. The threshold current density was 1.8 kA/cm² for a 10 μm device, and the output power was published in arbitrary units. The first room temperature CW operation was reported by Nichia Corporation, Japan, in December 2008.27 This device used two dielectric DBRs and a 50 nm ITO layer, and a device with an 8 μm aperture had a threshold current density of 13.9 kA/cm² and an output power of 0.14 mW. Publications from a few groups followed, and since the first demonstrations there has been a lot of progress in the field of electrically injected III-nitride based VCSELs, both in terms of new technological solutions as well as performance characteristics.31 To date there are seven groups in the world who have demonstrated lasing under electrical injection27-30,32-47, and the different approaches and structures are summarized in Table 1. The performance characteristics of published devices, in terms of output power and threshold current density, are plotted in Fig. 1. To compare performance characteristics of lasers it is important to study both optical output power and threshold current since they are interlinked; for example, a laser with a low top mirror reflectivity can have a high optical output power, but the threshold current would then also be high. To be able to compare published results from GaN-based VCSELs with different aperture sizes, current density instead of current is plotted in Fig. 1. This is, however, a measure that should be taken with caution, since the injection current and emission profile of GaN-based VCSELs are sometimes far from uniform across the whole aperture, due to filamentary lasing. Continuous-wave operation is indicated in the figure by filled markers and pulsed operation by open markers. The lasing wavelength is also different from case to case, which has been illustrated in the figure by three colored groups: violet (406-422 nm), blue (446-463 nm), and green (503 nm).
Table 1. Different approaches and structures used in electrically injected VCSELs.27-30,32-47
|NCTU 200830 (2010)40||Nichia 200827||Nichia 200944 (2011)45||Panasonic 201228||EPFL 201232||UCSB 201233 201435||Xiamen 201446||NCTU 201442 (2015)43||Sony 201529 (2016)47||UCSB 201536 (2016)38||UCSB 201537|
|Top DBR||8× (10×) Ta2O5/SiO2||7× Nb2O5/SiO2||ZrO2/SiO2||7× TiO2/SiO2||13× Ta2O5/SiO2||14× ZrO2/SiO2||10× Ta2O5/SiO2||12× (11.5×) Ta2O5/SiO2||16× Ta2O5/SiO2||16× Ta2O5/SiO2|
|Current spreading layer||240nm ITO (30nm ITO and 2nm p+ InGaN)||50nm ITO||100nm ITO||50nm ITO||50nm ITO||30nm ITO and 2nm n-InGaN||40nm annealed ITO||30nm ITO||47nm ITO||141nm TJ = n++GaN/n-GaN/n++GaN/n-GaN [39.6/39.6/39.6/22.1] nm|
|p-GaN thickness||120nm (110nm)||97nm p-GaN and 20nm p+ GaN||113nm p-GaN and 14nm p++ GaN||159nm||100nm||100nm||56nm||62.2nm p-GaN and 14nm p++ GaN|
|Aperture||200nm SiNx||SiO2||SiO2||SiO2||Plasma passivated p-GaN surface||SiNx||SiO2||200nm SiNx||SiO2 (Boron implant.)||Al ion implantation (airgap)||Al ion implantation|
|Aperture diameter||10μm||8μm||8μm (10μm)||20μm||8μm||7μm||10μm||10 μm (5μm)||8μm||12μm||12μm|
|EBL||- (24nm AlGaN)||-||-||p-AlGaN||20nm p-AlGaN||15nm p-AlGaN||20nm p-AlGaN||Graded AlGaN (AlGaN/GaN multiple barrier)||5nm p-AlGaN||5nm p-AlGaN|
|QWs / Barriers||10× InGaN/GaN [2.5nm/7.5nm] ([2.5nm/12.5nm])||2× InGaN/GaN [9nm/13nm]||5×||5× In0.10GaN/In0.01GaN [5nm/5nm]||5× In0.12GaN/GaN [7nm/5nm]||5× coupled InGaN/GaN [4nm/4nm]||10× In0.1GaN/GaN [2.5nm/10nm] (5× In0.15GaN/In0.02GaN [4nm/8nm])||2× (4×) InGaN/GaN [6nm/10nm]||7× InGaN/GaN [3nm/1nm]||7× InGaN/GaN [3nm/1nm]|
|n-GaN thickness||790nm (860nm)||944nm||902nm||880nm||4μm ELO||50nm n++ and ~770nm n||762nm|
|Cavity length||5λ* (7λ*)||7λ*||~4μm||35λ||7λ*||7.5λ||~2.18μm [13λ]||6.95λ|
|Bottom DBR||29× AlN/GaN with SPSL||11× SiO2/Nb2O5||11× SiO2/Nb2O5||SiO2/ZrO2||42× Al0.82InN/GaN||13× SiO2/Ta2O5||17.5× SiO2/ZrO2||25× AlN/GaN||14.5× SiO2/SiNx||10× Ta2O5/SiO2||12× Ta2O5/SiO2|
|Substrate||Sapphire||Sapphire||FS-GaN||FS-GaN||FS-GaN||FS-GaN m-plane||Sapphire||Sapphire (GaN)||GaN||FS-GaN m-plane||FS-GaN m-plane|
|Substrate removal techn.||-||LLO and CMP||CMP||CMP||-||PEC||LLO, ICP, and CMP||-||Lapped to 80μm and packaged||PEC||PEC|
*The penetration depth into the DBR is not accounted for.
As seen in Fig. 1, there are now several reports of optical output powers close to 1 mW with decent current densities. These performance characteristics can be compared to those of the more mature GaAs-VCSELs emitting at 850 nm, where our standard devices with an 8-μm aperture size have a maximum optical output power around 4-10 mW, depending on mirror design, and a threshold current density around 0.8-2.0 kA/cm2. The performance of our GaAs-VCSELs with 9 and 11 μm apertures is published in Refs.48,49. The comparison in performance characteristics between GaN- and GaAs-VCSELs shows that there is still room for improvement for GaN-based VCSELs.
A schematic view of an electrically injected GaN-based VCSEL is shown in Fig. 2, and as mentioned previously, the detailed structures are summarized in Table 1. Compared to a GaAs-based VCSEL there are some differences, such as the use of one (or even two) dielectric DBRs in GaN-VCSELs due to the lack of two nearly lattice-matched materials with a high refractive index contrast. The DBRs are not electrically conductive, thus intracavity contacts are always used, and the cavity length is much longer in GaN-VCSELs. Due to the low electrical conductivity of p-GaN, transparent current spreading layers are necessary to allow for efficient current injection from the intracavity contacts to the multiple InGaN quantum wells. Indium-tin-oxide is employed in most cases, except for one recent publication which demonstrated a tunnel junction combined with highly conductive n-GaN on top as the current spreader37. Lateral current confinement has usually been achieved by a dielectric aperture of silicon dioxide or silicon nitride, and lately optically guided structures are also being explored and implemented. In GaAs-VCSELs, simultaneous current and optical confinement is achieved by selective oxidation of a high Al-content AlGaAs layer. Recently, new approaches for current confinement in GaN-VCSELs have been developed, such as plasma damage of p-GaN32, ion implantation36,47, and airgaps formed by photoelectrochemical etching38. A number of key challenges in GaN-VCSELs will be described in more detail in the next chapter.
GaAs-VCSELs profit from the nearly lattice-matched AlGaAs material system, which allows crack-free, highly reflective distributed Bragg reflectors (DBRs) to be grown. The relatively high refractive index contrast (Δn/nhigh) of 14% at 850 nm for an Al0.12Ga0.88As/Al0.90Ga0.10As DBR results in a broad stopband of 75 nm (full-width half-maximum) for a standard bottom DBR with 34 mirror pairs. Since both mirrors in a VCSEL need a reflectivity above 99%, an even more relevant measure is the width of the spectral region with a reflectivity above 99%, which is 59 nm for the mentioned DBR. Besides a high reflectivity over a broad spectral range, AlGaAs DBRs can also be electrically conductive, allowing for current injection through the DBRs; contacts can thus be placed on top of the top DBR and below the bottom DBR. Achieving broadband, highly reflective, electrically conductive DBRs in the III-nitride material system is a big challenge. It is difficult to find two materials with a high refractive index contrast that are lattice-matched to each other and in addition have high electrical conductivities. High peak reflectivities (>99%) have been reached with DBRs in (Al)GaN/Al(Ga)N50-53 as well as in AlInN/GaN54,55. An AlN/GaN DBR has a higher refractive index contrast (about 12% at 420 nm), which is close to that of AlGaAs DBRs, and as a result the 99% stopband width is about 22 nm if 22 mirror pairs are used. However, this DBR suffers from an in-plane lattice mismatch of about 2.5%. Ternary AlGaN could be used instead of binary AlN to mitigate cracking, but a larger number of pairs is then necessary to reach a high enough reflectivity due to the lower index contrast, and the stopband width will thus be narrower. To reduce stress in the DBR, short-period superlattices (SLs) have been used, both below the DBR56 and incorporated into the DBR52. The inserted AlN/GaN SLs can suppress the generation of cracks, but cannot reduce the number of V-defects.57 Another approach to strain management is to use thin low-temperature AlN interlayers.53 The interlayers can reduce crack formation, but they can also reduce the overall reflectivity of the DBR, which has to be taken into account when designing the DBR. Spontaneously formed interlayers have also been seen to reduce the peak mirror reflectivity and the stopband width.58 Selective area growth has also been applied to grow crack-free DBRs in areas up to 150×150 μm,59 sufficiently large for VCSELs. Instead of AlGaN, an epitaxial DBR can be realized using lattice-matched AlInN/GaN to grow strain-free, highly reflective mirrors60. The composition of AlInN is very sensitive to growth conditions, but if grown correctly it can result in a basically strain-free DBR, allowing high quality quantum wells to be grown on top. The index contrast of the layers is however lower than for AlN/GaN (about 6% at 420 nm), which results in a mirror with a narrower 99% stopband of about 12 nm if 42 mirror pairs are used. Any deviation in the thickness of the DBR layers will translate into a deviation of the center position of the stopband; for example, a 3% thickness deviation results in a 3% shift of the center position of the DBR stopband, which corresponds to 12.5 nm if the mirror is designed for 420 nm. As a result, the 99% stopband could end up outside the 99% stopband of the other DBR if one mirror has 3% thicker layers.
Thus, a narrow-stopband DBR makes the spectral alignment between the DBR reflectivities more challenging, as well as their matching with the gain spectrum and cavity resonance. There have been very few reports on electrically conductive DBRs61-63, due to the low electrical conductivity of III-nitrides and the many hetero-interfaces necessary in the DBRs (a consequence of the low refractive index contrast of the materials). Thus, intracavity contacts have so far been the only option for GaN-VCSELs.
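To make the reflectivity and stopband numbers above concrete, the short sketch below evaluates the standard textbook expressions for an ideal quarter-wave DBR at normal incidence: the peak reflectivity follows from the admittance transformation of quarter-wave layers, and the fractional stopband width is (4/π)·arcsin((nH−nL)/(nH+nL)). This is a minimal illustration, not the transfer-matrix model used in the cited work; the refractive index values and the pair count are assumptions chosen for illustration.

```python
import numpy as np

def dbr_peak_reflectivity(n_inc, n_hi, n_lo, n_sub, pairs):
    # Each quarter-wave (high/low) pair multiplies the effective admittance
    # seen from the incidence side by (n_hi/n_lo)^2; absorption is neglected.
    y_eff = (n_hi / n_lo) ** (2 * pairs) * n_sub
    return ((n_inc - y_eff) / (n_inc + y_eff)) ** 2

def dbr_stopband_width(wl0, n_hi, n_lo):
    # Full spectral width of the high-reflectivity band of an ideal
    # quarter-wave stack centered at wl0.
    return wl0 * (4.0 / np.pi) * np.arcsin((n_hi - n_lo) / (n_hi + n_lo))

# Assumed indices near 420 nm (illustrative): GaN ~2.47, AlN ~2.16;
# 29 pairs, as in the NCTU AlN/GaN bottom DBR of Table 1.
r = dbr_peak_reflectivity(n_inc=2.47, n_hi=2.47, n_lo=2.16, n_sub=2.47, pairs=29)
sb = dbr_stopband_width(420e-9, 2.47, 2.16)
print(f"peak R = {r:.4f}, full stopband = {sb*1e9:.0f} nm")  # ~0.998, ~36 nm
```

Note that the full stopband computed this way is wider than the 99%-reflectivity band quoted in the text, and the results are quite sensitive to the exact index values assumed.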
To avoid the problems with epitaxial DBRs due to the low refractive index contrast and non-lattice-matched materials, an alternative approach is to use dielectric DBRs. Due to the higher refractive index contrast (44% if SiO2/TiO2 is used), only about 11 pairs are required to achieve a high reflectivity, and the resulting 99% stopband width is as wide as 148 nm. A double dielectric DBR scheme does however require either substrate removal or epitaxial lateral overgrowth (ELOG) of the bottom dielectric DBR, making the exact control of the cavity length challenging, see Section 3.3. Substrate removal has been achieved by laser-induced lift-off, polishing, and photoelectrochemical etching (PEC) of a sacrificial layer. Laser-induced lift-off is not easily applied to VCSEL structures grown on free-standing GaN, due to the lack of absorption contrast between the epitaxial GaN and the GaN substrate. Insertion of light-absorbing structures is possible, but might hamper the crystal quality of the VCSEL structure grown on top, since such structures are usually not perfectly lattice-matched to GaN. The fabrication process requires bonding to a support substrate to be able to handle the thin layer structures that are lifted off, and the layer structure has to have a certain thickness, typically above 4 μm, to avoid stress fractures in the GaN film.64 Chemical mechanical polishing (CMP), on the other hand, can be applied to remove GaN substrates from epitaxial GaN structures. However, thickness control is a challenge, as is achieving a low thickness variation across a large-area substrate, due to the hardness of the GaN material. This method also requires bonding to a support substrate. PEC also requires bonding, but has the advantage of providing a more accurate cavity length control. ELOG does not require bonding, but achieving short cavity lengths and exact control of the cavity length is challenging, due to the ratio between the vertical and horizontal growth rates. Besides the above-mentioned challenges, a double dielectric DBR scheme will in addition lead to VCSELs with increased heating, and thereby a faster misalignment of cavity resonance and gain peak, due to the lower thermal conductivity of dielectric DBRs.
Dielectric DBRs have a high refractive index contrast, but an even higher contrast (Δn/nhigh = 60%) can be achieved if the DBR consists of GaN and air. In this case only a few periods would be necessary to achieve a high reflectivity, and the stopband will be very broad. In the realization of a semiconductor/airgap DBR, selective wet-etching is crucial, as every second layer (the sacrificial layer) in the DBR has to be removed to create the airgaps. Unfortunately, III-nitrides are chemically stable in most solutions and thus very difficult to wet etch. There have been a few demonstrations of GaN/air DBRs, utilizing wet-chemical etching of AlInN65,66, combined electrochemical oxidation and wet-chemical etching of AlInN67, photoelectrochemical etching of InGaN68, and electrochemical etching of n-doped GaN69,70. Most critical in the fabrication of GaN/airgap DBRs is to avoid collapse of the structure and to achieve a mechanically stable structure with no bending. The thickness of the GaN layers is often increased to 5λ/4 instead of λ/4 to improve the mechanical stability. Another challenge is the often low selectivity of the wet etch, which results in a non-uniform thickness of the GaN layers in the radial direction, since they are slightly etched during the removal of the sacrificial layer.
An alternative to a DBR is a high contrast grating (HCG), which is described in more detail in Section 5. HCGs are believed to offer a number of advantages71, such as transverse mode and polarization control and a broader reflectivity spectrum than epitaxially grown DBRs. An additional benefit is the possibility to set the resonance wavelength of the VCSEL by the dimensions of the grating72, which was recently demonstrated in a GaAs-based VCSEL73. By varying the duty cycle and/or the period of the grating, while maintaining the same grating layer thickness, the phase of the reflected wave can be altered without affecting the magnitude of the reflection. A change in phase is effectively the same as a change in cavity length, i.e., by varying the lateral parameters of the grating on nearby VCSELs, individual resonance wavelengths can be achieved for devices from the same epitaxial layer structure, thus enabling the fabrication of a multi-wavelength VCSEL array where the emission wavelengths of the individual devices are set in one single post-growth lithography step. A main challenge in the fabrication of a III-nitride-based HCG is to find a sacrificial layer that can be selectively removed without affecting the HCG layer. There have been a few attempts, such as bandgap-selective photoelectrochemical etching of a sacrificial InGaN superlattice to form an AlGaN HCG membrane74, however with a limited airgap height, and focused-ion-beam etching to create an airgap underneath a GaN-based HCG75, an impractical process for device integration on a wafer scale. In addition, GaN membrane gratings have been fabricated from a GaN-on-Si structure76-78 by selective etching of the Si, and free-standing hafnium-oxide gratings using the same approach79, but applying this concept to fabricate a bottom mirror in a VCSEL is not straightforward, since growth of high-quality GaN on Si for laser applications is very challenging. Due to the difficulties in realizing a III-nitride-based HCG structure with an airgap, a grating reflector without an airgap could be used, which has previously been proposed for long-wavelength VCSELs where the mirror fabrication also is complex.80 Such a reflector in GaN has been demonstrated.81 This concept offers a more mechanically rigid structure, but the lower index contrast results in a much smaller fabrication window for achieving a reflectivity above 99%. An alternative approach for an HCG for III-nitride-based light-emitters is a free-standing dielectric HCG in TiO2, realized by means of a sacrificial SiO2 layer82. It offers approximately the same high index contrast as a free-standing GaN HCG, since the refractive index of TiO2 is about 2.6 at a wavelength of 450 nm, with negligible absorption for wavelengths above 400 nm. In addition, the TiO2 HCG scheme allows for direct integration into many different material systems, since lattice-matching is no longer a prerequisite.
Carrier transport and optical gain
Carrier transport must be optimized along both the vertical and lateral directions in order to achieve a uniform carrier distribution among the multiple QWs of the active region and within each QW as defined by the current aperture. A vertically uniform carrier distribution among the QWs is a challenge in all GaN-based light-emitters83,84 due to the large difference between the activation energies of donor and acceptor impurities85, the strong imbalance between electron and hole mobilities86, the large band offsets and, possibly even more critical, the spontaneous and piezoelectric polarization effects at heterointerfaces87,88. In order to maximize the radiative recombination in the QWs, one should minimize current crowding89,90 as well as carrier leakage beyond the active region, which can involve both electrons91-93 and holes94 and can be enhanced by Auger95,96 and excited subband recombination97. Electrons can be prevented from leaking into the p-GaN layers by inserting an energy barrier in the conduction band between the QWs and the p-layers, a so-called electron-blocking layer (EBL). However, an EBL will also affect hole injection into the QWs, and the EBL design is thus crucial for low-threshold operation. Many different EBLs have been proposed and their effect on VCSEL performance has been investigated, such as different p-doping levels of the EBL98, compositionally graded EBLs99, and multiple-barrier EBLs100.
The design of the active region is very important, and Table 1 summarizes the different approaches used in electrically injected VCSELs. It is clear that there is so far no consensus on how the active region should be designed. For example, having many QWs (up to 10 have been used) may increase the optical gain per roundtrip in the cavity and the tolerance to spatial misalignment between the maximum of the standing optical wave and the QWs. One drawback is the nonuniform carrier distribution among the QWs due to the above-mentioned mechanisms; as a result, it may occur that only the QWs nearest to the p-side contribute to the optical emission, as observed experimentally in InGaN/GaN LEDs101 and predicted by simulations of GaN-based Fabry-Pérot lasers102 and VCSELs103. This reduces the net gain, since QWs that are pumped below transparency absorb light. The use of fewer QWs can lead to a more uniform carrier distribution between the QWs, but could also result in increased electron leakage104. Numerical simulations have shown that the insertion of a tunnel junction (TJ) in the middle of 10 QWs could improve the uniformity of the carrier distribution and lead to a higher output power105. Another approach to achieve a more uniform carrier distribution106 and higher optical gain107 is to grow on nonpolar or semipolar planes rather than along the c-plane. III-nitride-based materials have, unlike GaAs, strong built-in electric fields due to spontaneous and piezoelectric polarization. In the QWs, the large electric field perpendicular to the epitaxial layers spatially separates the hole and electron wavefunctions, which leads to a reduced wavefunction overlap and thus a reduced radiative recombination efficiency and an increased threshold current. An increased overlap between hole and electron wavefunctions can also be achieved in QWs grown on polar planes by tailoring the shape of the QW along the growth direction. These are referred to as large-overlap QWs, and both step-function compositional grading108,109 and linear grading110 have been proposed. An additional advantage of InGaN QWs on nonpolar and semipolar planes is the anisotropic optical gain, leading to VCSELs with a preferred polarization state35 without the use of, for example, surface gratings111,112.
To achieve low threshold currents, the optical mode must overlap well with the gain region defined by the current aperture. A laterally uniform carrier distribution within the gain region is also desirable to avoid selective excitation of higher-order transverse modes. However, simulations show that due to the large difference in mobility between electrons and holes in III-nitrides, the recombination will predominantly take place in the periphery of the aperture and not in the center113. In addition, III-nitride VCSELs must use intracavity contacts due to the non-conductive DBRs. Thus, current spreading layers are necessary, and the transparent conductive oxide indium-tin-oxide (ITO) is most commonly used. However, ITO has a relatively high absorption (about 1000 cm-1) and must thus be placed at a minimum of the standing optical wave to keep the losses low. Any deviation in cavity length from the design will dramatically increase the absorption losses. The surface roughness of the ITO should also be low, preferably with a root-mean-square value below 1 nm, to keep scattering losses in the VCSEL low114. Moreover, it is challenging to achieve a low contact resistivity between p-GaN and ITO, making it difficult for the contact to survive the high current densities required in VCSELs (typically tens of kA/cm2). ITO deposition, usually done by sputtering to achieve high quality115, can easily lead to plasma damage of the p-GaN116,117, preventing a low-resistive contact from being formed. Remote plasma deposition or physical vapor deposition such as electron beam evaporation can be used to avoid plasma damage of the p-GaN. In addition, multilayered ITO deposition at different temperatures has been developed to optimize contact resistivity, ITO properties118 and surface roughness114.
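As a rough feel for this placement tolerance, the spacing between adjacent nodes of the intracavity standing wave is λ/(2n), so a ~50 nm ITO film must be centered on a node to within a small fraction of that spacing. The values below are assumptions used only for illustration.

```python
# Node-to-node spacing of the standing optical wave inside the GaN cavity.
# Assumed values: 420 nm design wavelength, GaN refractive index ~2.45.
wavelength = 420e-9
n_gan = 2.45
node_spacing = wavelength / (2 * n_gan)
print(f"node spacing = {node_spacing*1e9:.0f} nm")  # ~86 nm
# A 50 nm ITO layer therefore spans more than half the node spacing, so a
# cavity-length error of only tens of nm pushes it toward a field antinode.
```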
Alternative solutions for a current spreading layer have been investigated, such as thin metal layers119 and graphene120. Metals have low sheet resistance and can provide low-resistive contacts to p-GaN, but have a strong absorbance. If made thin enough and placed very accurately at a node of the optical field, the optical absorption loss can be relatively low, but any deviation from the target position will dramatically increase the absorption loss. Moreover, the resistance of very thin metal films is much higher than that of bulk layers121, and their reliability and degradation at high current densities need to be investigated. A single layer of graphene has an absorption loss of 2.3%122 in the visible wavelength range, and if placed accurately within the cavity it would lead to a relatively low loss. However, most graphene has so far been transferred onto GaN-based LEDs, and as a result of the transfer the contact resistivity is poor. There have been attempts to grow graphene directly on p-GaN123,124. If a technique can be developed that provides graphene, or any other two-dimensional material, with low resistance, high transparency and an ohmic contact to p-GaN that can withstand high current densities, grown under conditions that do not jeopardize the underlying device structure, it may be a way forward. So far, though, the most promising approach to replace ITO is the tunnel junction (TJ). By incorporating a TJ, the topmost part of the p-GaN layer and the current spreading ITO layer can be replaced by low-resistive n-GaN, addressing the current spreading issue and the associated high optical loss. In long-wavelength InP-based VCSELs, which share many of the challenges of GaN-VCSELs, such as low p-type conductivity, high metal contact resistance to p-doped material, and poorly conductive DBRs, tunnel junctions are a standard technology to achieve efficient lateral current spreading125. In order for the TJ to be successful in III-nitride-based VCSELs, it should in reverse bias have a low resistance with a low additional voltage drop and low optical absorption, and it should be able to withstand the high current densities. This has been a challenge in III-nitrides due to the large bandgaps and low hole concentrations. In recent work, TJs with thin InGaN layers have been explored to utilize the lower bandgap and the polarization fields to reduce the tunneling barrier and facilitate carrier tunneling126,127. However, the absorption losses, critical for VCSELs, must be investigated and will most likely be higher than if only GaN is used. GdN nanoislands have also been incorporated at the TJ interface to increase the tunneling through midgap states128,129. Very promising are the pure GaN TJs demonstrated recently, which have been incorporated into LEDs130,131. They have shown a low differential resistance for a complete device (low 10-4 Ωcm2) and an ability to withstand current densities exceeding 20 kA/cm2. Buried tunnel junctions have also been demonstrated to laterally confine the current130, and if the structural step introduced by the overgrown TJ transfers into the top DBR, it might also provide simultaneous optical guiding132,133.
In most III-nitride-based VCSELs, the main focus has been on lateral current confinement and not on optical confinement. However, to achieve an efficient device with a low threshold it is important to ensure a good lateral overlap between the optical mode and the carriers in the QWs. In fact, it has been shown that most apertures used for current confinement result in optically anti-guided structures with associated high losses132,133; for more details see Section 4. Switching to index-guided structures instead will contribute to lowering threshold currents and can be an effective way to reduce filamentation, something often seen in GaN-VCSELs27,28,30,32,39,40,46 and also seen in early proton-implanted gain-guided GaAs-VCSELs135,136.
To achieve a VCSEL with a low threshold current it is important to ensure a good spatial overlap between the optical mode and the carriers. In the lateral direction this is achieved by having an aperture that confines carriers and optical field to the center of the device, and in the vertical direction by placing the quantum wells (QWs) at a maximum of the standing optical wave. The optically absorbing regions, such as the ITO layer, should be placed at a field minimum, see Fig. 3. Besides this spatial matching, spectral matching between the cavity resonance, gain peak, and reflectivity of the mirrors is also required. Good control of the cavity length is of utmost importance since it affects both spatial and spectral matching. Table 2 shows that most GaN-VCSELs use a cavity length (distance between the DBRs) of about 7λ or even longer. GaAs-VCSELs, on the other hand, use a shorter cavity length, even as short as 0.5λ, to improve transport and optical confinement in order to push the high-speed performance137-139. There are several reasons for using longer cavity lengths in GaN-VCSELs. The intracavity contacting scheme forces the use of a thick n-GaN to keep the lateral resistance from the n-contact layer low. If there is residual strain in the underlying structure, for example from epitaxial DBRs, the n-GaN must be grown to a certain thickness to allow for high quality QWs to be grown on top. If an epitaxial lateral overgrowth of the DBR is applied instead, the thickness of the n-GaN will be determined by the ratio between lateral and vertical growth rates in combination with the geometry of the overgrown DBR29. If chemical mechanical polishing is used to remove the substrate, the n-GaN will also be thick due to the inaccuracy of the substrate removal process28. In addition, a longer cavity length yields a decreased longitudinal mode separation, making it easier to match one longitudinal mode (or several) to the stopband of the mirrors and the gain spectrum. In a device with a 2-μm cavity length and two dielectric DBRs, the measured longitudinal mode separation was about 7 nm28, while the gain spectral width for InGaN QWs is below 15 nm140 for a carrier concentration of 3·1019 cm-3, and the net modal gain spectrum will be even narrower. If epitaxial DBRs are used, the refractive index contrast is lower, and thus the penetration depth into the DBRs is longer, yielding a much longer effective cavity length for the same physical distance between the DBRs. Thus, devices with one epitaxial DBR usually have a shorter longitudinal mode separation, but on the other hand the reflectivity spectrum that the mode has to fit within is narrower. Moreover, the resonance wavelength (λ) is less sensitive to deviations in cavity length (ΔL) for a longer cavity, i.e., Δλ = λ·ΔL/L.28 This is an advantage since it is very difficult to control the cavity length to within a few percent for a GaN-based VCSEL, no matter whether epitaxial or dielectric DBRs are used. The drawbacks of a longer cavity are increased absorption loss in the cavity, reduced longitudinal confinement, and a decreased spontaneous emission factor.
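The two spectral quantities involved here follow from a simple Fabry-Pérot picture: the longitudinal mode separation is roughly λ²/(2·n_g·L_eff), and the resonance shift for a small length error is the Δλ = λ·ΔL/L relation quoted above. The sketch below evaluates both; the group index is an assumption, and the effective cavity length is chosen so the outputs land near the double-dielectric values discussed in the surrounding text.

```python
def mode_separation(wl, n_group, length):
    # Fabry-Perot free spectral range expressed in wavelength.
    return wl ** 2 / (2 * n_group * length)

def resonance_shift(wl, d_length, length):
    # First-order shift of the resonance for a small cavity-length error.
    return wl * d_length / length

wl = 420e-9        # design wavelength
n_g = 2.6          # assumed group index of GaN around 420 nm
L_eff = 1.26e-6    # assumed effective cavity length (double dielectric case)

print(mode_separation(wl, n_g, L_eff) * 1e9)    # ~27 nm mode separation
print(resonance_shift(wl, 30e-9, L_eff) * 1e9)  # ~10 nm shift for a 30 nm error
```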
The importance of an accurate cavity length is illustrated in Fig. 4 for the case of two dielectric DBRs and in Fig. 5 for one dielectric and one epitaxial DBR. The structure and material data used are the same as in Ref.132, except for the bottom DBR, which in Fig. 4 consists of 11 mirror pairs of TiO2/SiO2 and in Fig. 5 of 42 mirror pairs of AlInN/GaN without any absorption losses. The calculations were performed using the one-dimensional effective index method141, and the deviation in cavity length corresponds to a change in the bottom n-GaN thickness, which nominally is 904.4 nm. A negative deviation corresponds to a shorter cavity length and a positive deviation to a longer cavity length. The threshold material gain increases faster with a deviation in cavity length for the VCSEL with an epitaxial DBR: an increase of 100 cm-1 is reached already when the cavity length deviates by less than ±10 nm, while in the double dielectric DBR VCSEL the cavity length can deviate by more than ±35 nm before the same increase of 100 cm-1 occurs in threshold material gain. This is due to the fact that when the cavity length deviates from the optimum value, the resonance wavelength will shift and no longer overlap with the maximum reflectivity of the DBR. In the case of a DBR with low refractive index contrast (such as an AlInN/GaN DBR), this reduction in mirror reflectivity occurs much faster due to the narrower reflectivity spectrum. Due to the low refractive index contrast of the epitaxial DBR, the penetration depth of the optical field into the DBR is longer, which results in a longer effective cavity length, and thus a shorter separation of the longitudinal modes. For the examples in the figures the longitudinal mode separation is about 18 nm in the case with one epitaxial DBR and 27 nm for the double dielectric scheme. The shorter longitudinal mode separation does not help much to reduce the threshold material gain required for the epitaxial DBR VCSEL, since it is dominated by the low peak reflectivity and narrow reflectivity spectrum. It can however help a bit in reducing the threshold current, since the detuning between the resonance wavelength of a longitudinal mode and the gain peak will be smaller if the longitudinal mode separation is smaller. On the other hand, if the longitudinal mode separation is too short, multiple longitudinal mode lasing can occur. It should be noted that the resonance wavelength shifts less with a deviation in cavity length for the epitaxial DBR VCSEL, due to the longer effective cavity length. The shift is about 7 nm for a thickness deviation of 30 nm in the epitaxial DBR VCSEL case, and 10 nm for the same thickness deviation in the double dielectric DBR VCSEL. A benefit of the epitaxial DBR is the higher thermal conductivity, which results in a smaller resonance wavelength shift with drive current due to a lower temperature rise, which also affects the overall laser performance. The increase in threshold material gain due to a deviation in cavity length will affect the laser performance, but is not big enough to totally prevent lasing in the case of a double dielectric DBR, since InGaN QWs have proven to be able to deliver a material gain above 2500 cm-1.140 However, in the case of a VCSEL with a bottom epitaxial DBR, the cavity detuning might in fact prevent the laser from reaching threshold.
GUIDING AND ANTIGUIDING EFFECTS
In the early works on electrically injected GaN-based VCSELs, the main focus for apertures was to obtain good lateral current confinement, without paying much attention to the impact on the optical properties27,28,30,32,33,40,44,45. The method applied was to create an aperture by depositing an electrically non-conductive layer (often SiO2 or SixNy) on the p-GaN on top of the mesa and then fabricating a hole in the center onto which a transparent conductive layer (ITO) was deposited to pass the current through. In 2013, we reported our concern regarding these resulting non-planar structures134, i.e. structures having a downward step profile in the center (δ>0 in Fig. 7), referred to as a depression in the structure. It was shown that the degree of the depression (or of the elevation, in the case of an upward step profile, δ<0) is simply and directly related to the ability of the laser cavity to work as a good waveguide. An optimal design of the optical waveguide is of great importance to achieve a good lateral overlap between the optical mode and the carriers and to minimize lateral diffraction and radiation loss.
Fig. 8 shows the layer structure with a depression in the center and the standing optical wave pattern in the center of the device. For some lateral cross-sections, indicated by green dashed lines, the refractive index of the material is lower in the center than in the periphery, while for cross-sections in between, the situation is reversed, i.e. the refractive index is higher in the center of the device than in the periphery. Unfortunately, the standing optical field is high at the positions where the refractive index is low in the center, effectively yielding a structure with a low refractive index in the center and a high refractive index in the periphery, a so-called anti-guided structure. An anti-guided structure is known to lead to higher lateral radiation loss and thereby higher threshold currents and lower output powers, if lasing is possible at all.
To investigate this further, a three-dimensional (3D) Beam Propagation Method (BPM) was deployed to calculate the threshold gain, accounting for lateral radiation loss mechanisms such as diffraction and lateral leakage of light132, for a number of different aperture schemes. The calculated threshold material gain as a function of the depression parameter δ is plotted in Fig. 9(a) for the fundamental and the first higher-order mode of a cold cavity. For details about the different aperture schemes, Type A-E, see Ref.132. A strong dependence on the size and sign of the structural depression parameter δ is clearly observed for the fundamental mode, and an even more extreme sensitivity for the first higher-order mode. For δ<0 the threshold gain is low and fairly constant. For small positive δ, on the other hand, the threshold gain is very large and decreases with increasing δ. Fig. 9(b) shows the threshold material gain under drive conditions which result in a maximum temperature increase of 70° in the center of the cavity. Thermal lensing effects result in a dramatic decrease in threshold gain for the cases with the highest threshold gain, while the cases where δ<0 are hardly affected at all.
To explore the threshold gain's dependence on the depression δ in more detail, the different contributions to the cavity loss, i.e. absorption, mirror outcoupling, and lateral (diffraction and lateral leakage) loss, were identified and are summarized in Fig. 10 for the fundamental mode in the cold-cavity case. The absorption and outcoupling losses do not vary much with the depression parameter δ, while the lateral loss is strongly dependent on nm-sized changes in the cavity structure. More information regarding this study can be found in Refs.132,133.
To avoid optically anti-guided structures with high optical losses, i.e. structures with a depression in the center, new current and optical confinement schemes are now being developed by several groups. Cosendey and coworkers at EPFL32 obtained transverse current confinement using a reactive ion etching (RIE) treatment to not only passivate the p-GaN surface but also etch very slightly into the periphery, forming a shallow elevated structure in the center of about 10 nm. Their devices from the same epitaxial material, but with a "standard" SiO2 aperture that creates a depression in the center of the device, did not lase at all. Lai et al. at National Chiao Tung University made a shallow mesa etch in their microcavity structure and observed an improvement in optical confinement and cavity quality factor142. Leonard and coworkers at UCSB fabricated devices incorporating Al ion-implanted apertures in order to achieve a planar design without any depression or elevation in the center36. The built-in optical guiding was small, due to the small index contrast between the implanted region and the center of the device, but nevertheless led to devices with a five times lower threshold current than their "standard" silicon nitride apertures. The UCSB team has also realized an airgap aperture by photoelectrochemical lateral undercut etching to selectively remove the multiple QWs outside the aperture region. This led to strong lateral optical confinement, due to the large refractive index contrast between airgap and QWs, and no filamentation in the laser38.
TIO2/AIR HIGH CONTRAST GRATING REFLECTORS FOR GAN-BASED VCSELS
For GaN-based microcavity light emitters in the blue-green wavelength regime, such as VCSELs and resonant cavity light emitting diodes (RCLEDs), obtaining a highly reflective feedback mirror with a wide bandwidth is truly challenging. In this respect, high contrast grating (HCG) structures have recently become quite popular as top reflectors in VCSELs thanks to their potential advantages over the much thicker conventional distributed Bragg reflectors (DBRs), such as broadband high reflectivity, wavelength setting capability, transverse mode control, and polarization selectivity. Very high performance HCGs have been demonstrated in different material systems such as GaAs, AlGaAs, InP, and Si. However, much inferior performance has been achieved for GaN-based HCGs due to difficulties in this material system: there is no simple way to find a sacrificial layer with a high wet-etch selectivity to GaN. Attempts to realize III-nitride membrane-type HCG structures for the visible regime have been limited to bandgap-selective photoelectrochemical (PEC) etching of InGaN superlattices74 and focused-ion-beam (FIB) etching75 to make the airgaps.
We proposed a new approach to achieve HCGs for GaN-based VCSELs using dielectric materials instead82. TiO2 is a dielectric material which has a refractive index similar to that of GaN and has a very low optical absorption for wavelengths above 400 nm. Moreover, a high etch selectivity can be achieved between SiO2 and TiO2 making them very suitable to be used as the sacrificial and grating materials, respectively.
Rigorous coupled-wave analysis (RCWA) simulations have been performed to design the free-standing TiO2 HCGs143. The resulting reflectance contour plots (with reflectivities above 99%) versus duty cycle and period (Λ) are shown in Fig. 11, both for the electric field perpendicular to the grating bars (TM polarized light) and parallel to the grating bars (TE polarized light). The gratings were designed for a wavelength of 450 nm, where the refractive index of TiO2 is 2.6. As seen in the figure, the tolerance window for fabrication imperfections is larger for TM polarized light than for TE polarized light, and the gratings were therefore designed to yield high reflectivity for TM polarized light.
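One design constraint behind such contour plots is that the grating must be subwavelength in the surrounding media, so that only the zeroth diffraction order propagates outside the grating while higher orders can exist inside the high-index bars. The grating-equation check below illustrates this; the 300 nm period is an assumed example value, not a design parameter taken from the measurements.

```python
import numpy as np

def propagating_orders(period, wavelength, n_medium, theta_inc=0.0, m_max=5):
    # Grating equation: n_medium * sin(theta_m) = sin(theta_inc) + m * wavelength / period.
    # An order m propagates in the medium only if |sin(theta_m)| <= 1.
    orders = []
    for m in range(-m_max, m_max + 1):
        s = (np.sin(theta_inc) + m * wavelength / period) / n_medium
        if abs(s) <= 1.0:
            orders.append(m)
    return orders

# Assumed example: a 300 nm period at the 450 nm design wavelength.
print(propagating_orders(300e-9, 450e-9, n_medium=1.0))  # [0] -> only specular order in air
print(propagating_orders(300e-9, 450e-9, n_medium=2.6))  # [-1, 0, 1] inside the TiO2 bars
```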
To fabricate the HCGs, a Ni/SiO2 hard mask was defined by e-beam lithography, lift-off of the Ni layer, and subsequent dry etching of the SiO2. The pattern was transferred into the TiO2 by dry etching, the Ni was removed in a Ni-Cr etchant, and the sacrificial SiO2 layer was under-etched in a buffered oxide etch. Critical point drying was used to avoid stiction and collapse of the grating bars. SEM images of the fabricated free-standing TiO2 HCG structures are shown in Fig. 12.
The HCGs were fabricated to fit onto VCSEL mesas and therefore had a rather small diameter of less than 20 μm. A micro-reflectance measurement setup was therefore employed for the characterization. However, since microscope objectives have a certain numerical aperture, the illumination and capture angles will not be limited to normal incidence. In the setup used, the acceptance cone included angles up to 6°. More details about the characterization method, and the constraints involved in estimating the absolute reflectance of these HCG structures, can be found in Ref.82.
Fig. 13 contains the extracted reflectance spectra for both TM and TE polarized light. Peak reflectance values exceeding 95% near the center wavelength of 435 nm, with a full-width at half-maximum (FWHM) of the stopband of about 80 nm, are achieved for TM polarized light. The peak reflectance for TE polarized light is, however, only 30% lower, which is a much smaller difference than the theoretical predictions indicate. This is because it is not possible to isolate the reflectivity of the HCG from that of the structure beneath it. The HCG is poorly reflective for TE polarized light, and therefore a large reflection comes from the air/Si substrate interface, which has been included in the simulations. To get a good agreement between the measured and simulated reflectivity spectra it is also important to consider the finite acceptance angle of 6° in the measurement setup, which is included in the simulations by averaging the reflectance values over the different angles of the incoming light. The difference between including and not including non-normally incident light in the simulations can be seen by comparing Figs. 13(a) and (b).
The realization of electrically injected GaN-based VCSELs is challenging, but the progress in recent years is encouraging. Several groups have now demonstrated electrically pumped devices with an optical output power close to 1 mW and threshold current densities between 3 and 16 kA/cm2. Some of the key challenges are to achieve high-reflectivity mirrors, vertical and lateral carrier confinement, efficient lateral current spreading, accurate cavity length control, and lateral mode confinement. In this paper we have summarized state-of-the-art results and highlighted our work on combined lateral current and optical mode confinement, where we show that many structures used for current confinement result in unintentionally optically anti-guided resonators. Such resonators can have a very high optical loss, which easily doubles the threshold gain for lasing. We have also presented an alternative to distributed Bragg reflectors as high-reflectivity mirrors, namely TiO2/air high contrast gratings (HCGs). Fabricated HCGs of this type show a high reflectivity (>95%) over a 25 nm wavelength span.
The authors would like to thank Henrik Frederiksen, Mats Hagberg, and Bengt Nilsson from the MC2 Nanofabrication Laboratory at Chalmers University of Technology, Sweden, for technological support in the processing, and Nicolas Grandjean and co-workers at EPFL, Switzerland for discussions on the layer structures used in GaN-VCSELs and providing the micro-reflectance measurement setup. This work was funded by the Swedish Energy Agency, Swedish Foundation for Strategic Research, and the Swedish Research Council.
Before Adam Smith cleared the air with his Wealth of Nations in 1776, most nations’ policies were grounded in mercantilism, a doctrine that failed to differentiate money from wealth. Gold and silver were thought to be real wealth, so European countries often engaged in wars and colonial expansion to find these precious metals. Losers in wars of conquest were forced to pay winners out of their national treasuries. Aztec gold and Inca silver poured into Europe. Monarchs often debased their currencies to finance Old World wars and New World colonies. Whether debasement or foreign conquest enriched the royal coffers, the money in circulation grew.
The British philosopher David Hume (1711–1776) was among the early economic thinkers who noted that rapid monetary growth triggers inflation.
Quantity theories of money identify the money supply as the primary determinant of nominal spending and, ultimately, the price level.
Quantity theories of money were first formalized about a century ago by economists at Cambridge University and by Irving Fisher of Yale University. Fisher’s analysis began with the equation of exchange.
The Equation of Exchange
Gross Domestic Product can be written as PQ because GDP has price level (P) and real output (Q) components. But how is the money supply related to GDP? Economists approach this question by computing how many times, on average, money changes hands annually for purchases of final output. For example, GDP in 1994 was roughly $7 trillion and the money supply (M1) averaged about $1 trillion, so the average dollar was used roughly seven times for purchases of output produced in 1994.
The average number of times a unit of money is used annually is called the income velocity (V) of money.
Velocity is computed by dividing GDP by the money supply: V = PQ/M. Multiplying both sides of V = PQ/M by M yields MV = PQ, a result called the equation of exchange.
The equation of exchange is written
M × V = P × Q
This equation is definitionally true given our computation of velocity1 and is interpreted as: the quantity of money times its velocity equals the price level times real output, which is GDP. Note that this equation suggests that the velocity of money is just as important as the quantity of money in circulation.
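As a quick numerical check of these definitions, using the 1994 figures quoted above (an illustrative sketch):

```python
gdp = 7.0e12   # nominal GDP = P*Q (1994 figure from the text)
m1 = 1.0e12    # money supply, M1

velocity = gdp / m1                    # V = PQ/M
print(velocity)                        # -> 7.0

# MV = PQ holds by construction, since V is defined as PQ/M.
assert abs(m1 * velocity - gdp) < 1.0
```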
A rough corollary is that the percentage change in the money supply plus the percentage change in velocity equals the percentage change in the price level plus the percentage change in real output:2

ΔM/M + ΔV/V ≈ ΔP/P + ΔQ/Q
Focus on the right side of this equation. Does it make sense that if the price of, say, tea bags rose 1% and you cut your purchases by 2%, your spending on tea would fall by about 1%? Intuitively, the percentage change in price plus the percentage change in quantity roughly equals the percentage change in spending. Reexamine the equation. Suppose output grew 3%, velocity did not change, and the money supply rose 7%. Average prices would rise 4% (7% + 0% = 4% + 3%). Learning these relationships will help you comprehend arguments between classical monetary theorists and their detractors.
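The additive growth-rate rule is a first-order approximation to the exact multiplicative relation (1 + ΔM/M)(1 + ΔV/V) = (1 + ΔP/P)(1 + ΔQ/Q); a short check with the numbers from the example above (a sketch):

```python
# Growth-rate form of the equation of exchange, using the example above:
# money +7%, velocity unchanged, output +3%.
dM, dV, dQ = 0.07, 0.00, 0.03

dP_approx = dM + dV - dQ                       # additive rule -> 0.04
dP_exact = (1 + dM) * (1 + dV) / (1 + dQ) - 1  # exact relation -> ~0.0388

print(dP_approx, round(dP_exact, 4))           # close, as expected for small rates
```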
The Crude Quantity Theory of Money
From certain assumptions about the variables in the equation of exchange (M, V, P, and Q), classical economists (including Fisher) conclude that, in equilibrium, the price level (P) is exactly proportional to the money supply (M). Let us see how they arrived at this conclusion.
• Constancy of Velocity
Classical economic reasoning views the income velocity (V) of money as determined solely by institutional factors, such as the organizational structure and efficiency of banking and credit, and by people's habitual patterns of spending money after receiving income. Velocity is thought to be constant, at least in the short run, because changes tend to occur slowly (a) in financial technologies (e.g., the ways checks clear or loans are granted or repaid) and (b) in the inflows and outflows of individuals' money (people's spending habits and their frequencies of receipts of incomes).3 Thus, we see a central assumption of the classical quantity theory: velocity is constant, so ΔV/V = 0.

Focus 1 reveals, however, that assuming constant velocity would be unrealistic for international monetary data in recent years. Nor would this assumption fit U.S. data for different measures of the money supply; between 1970 and 1993, for example, velocity for M1 increased by roughly 40%, while velocity for M2 was relatively constant and velocity for M3 fell by roughly 15%.
But why does classical economics view velocity (V) as unaffected by the price level (P), the level of real output (Q), or the money supply (M)? An answer lies in why people demand money. Classical macroeconomics assumes that people want to hold money only to consummate transactions and that people's spendings are fixed proportions of their incomes. The transactions motive is basically classical. Since National Income is roughly equal to GDP (or P × Q), the demand for money Md (a transactions demand) can be written Md = k(PQ), where k is a constant proportion of income held in monetary balances.4 For example, if each family held one-fifth of its average income of $10,000 in the form of money (k = 0.20), then the average quantity of money each family would demand would be Md = 0.20($10,000) = $2,000. The quantity of money demanded in the economy would be $2,000 times the number of families.
• Constancy of Real Output

Classical theory also assumes that real output (Q) does not depend on the other variables (M, V, and P) in the equation of exchange. Classical economists believe the natural state of the economy is full employment, so real output is influenced solely by the state of technology and by resource availability. Full employment is ensured by Say's Law if prices, wages, and interest rates are perfectly flexible. Moreover, both technology and the amounts of resources available are thought to change slowly, if at all, in the short run. Thus, real output (Q) is assumed to be approximately constant, and ΔQ/Q = 0. This may seem like a very strong assertion, but the intuitive appeal of the idea that real output is independent of the quantity of money (M), its velocity (V), or the price level (P) is convincing both to classical monetary theorists and to the new classical economists who have updated the classical tradition.
The idea that the amount of paper currency or coins issued by the government has virtually no effect on the economy’s productive capacity seems reasonable. Similarly, the velocity of money should not influence capacity. But what about the price level? After all, the law of supply suggests that the quantities of individual goods and services supplied will be greater the higher the market prices are. Shouldn’t the nation’s output increase if the price level rises? Classical economists say no. Here is why.
• A Crude Monetary Theory of the Price Level
Suppose your income and the values of all your assets exactly double. (That’s the good news.) Now suppose that the prices of everything you buy and all your debts also precisely double. (That’s the bad news.) Should your behavior change in any way? Your intuition should suggest not. Using similar logic, classical economists conclude that, in the long run, neither real output nor any other aspect of “real” economic behavior is affected by changes in the price level. Economic behavior is shaped by relative prices, not the absolute price level.
Recall that the percentage changes in the money supply and velocity roughly equal the percentage changes in the price level and real output. If velocity is constant and output is stable at full employment in the short run, then ΔV/V = 0 and ΔQ/Q = 0.

Classical economists are left with a fixed relationship between the money supply (M) and the price level (P). In equilibrium, the rate of inflation exactly equals the percentage growth rate of the money supply: ΔP/P = ΔM/M.

Thus, any acceleration of monetary growth would not affect real output, just inflation.
The Classical View of Investment
Firms buy machinery, construct buildings, or attempt to build up inventories whenever they expect the gross returns on these investments to exceed the total costs of acquiring them. Classical economists assume relatively stable and predictable economies, so they focus on the costs of acquiring investment goods; business investors’ expectations of profits are assumed realized, and the costs of new capital goods are presumed stable.
Equilibrium investment occurs when the expected rate of return on investment equals the interest rate. Prices for capital equipment are fairly stable, so any changes in the costs of acquiring capital primarily result from changes in interest rates. [Investors are effectively trading dimes for dollars as long as the cost of borrowing (the interest rate) is less than the return from investments made possible by borrowing.] Naturally, people will not invest unless they expect a return at least as high as they would receive if they simply lent their own money out for the interest.
Classical theorists view investment as very sensitive to the interest rate and believe that large swings in investment follow minute changes in interest rates. The expected rate of return (r) curve in Figure 6 is relatively sensitive, or flat. In this example, a decline in interest of 1/2 point (from 8% to 7.5%) boosts investment by 60% [(80 – 50)/50 = 30/50 = 0.60]. Flexible interest rates and a highly sensitive investment (rate of return) schedule easily equate planned saving and investment. All saving is invested, stabilizing the economy at full employment.
Classical Monetary Transmission
Classical monetary economists view linkages between the money supply and National Income as not only strong, but direct. This classical monetary transmission mechanism (how money enters the economy) is shown in Figure 7. Panel A reflects the effects of monetary changes on nominal income, and Panel B translates these changes into effects on real output.
Nominal income in Panel A is $6 trillion (point a) if the money supply is initially $1 trillion (M0). Note that output is at its full-employment level and MV = PQ; the figure initially assumes a velocity of 6, so a $1-trillion money supply supports $6 trillion of nominal income. This $6-trillion nominal income is equal to 6 trillion units of real output (point a in Panel B) at an average price level P0 of 100.
Money supply growth to $1.5 trillion (M1) boosts nominal income to $9 trillion (point b in Panel A). Output is fixed at full employment and velocity is constant at 6, so introducing this extra money into the economy increases Aggregate Demand from AD0 to AD1, which pushes the price level up to 150 ($9 trillion ÷ 6 trillion units of output = 1.50).
Thus, in a classical world, monetary policy shifts Aggregate Demand up or down along a vertical Aggregate Supply curve with only price effects, not quantity effects.
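The figure's arithmetic can be checked directly from MV = PQ with V and Q held fixed (a sketch of the numbers in the text):

```python
# Classical transmission: with V and Q fixed, money growth moves only P.
V = 6.0       # constant velocity
Q = 6.0e12    # full-employment output, 6 trillion units

for M in (1.0e12, 1.5e12):
    P = M * V / Q  # price level from MV = PQ
    print(f"M = ${M/1e12:.1f} trillion -> nominal income ${M*V/1e12:.0f} trillion, price index {100*P:.0f}")
# -> price index 100 at M = $1 trillion, 150 at M = $1.5 trillion
```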
• Summary: The Crude Quantity Theory
Summarizing the traditional classical foundations of the early crude quantity theory of money: the equation of exchange is a truism because of the way velocity is computed, MV = PQ. It follows that ΔM/M + ΔV/V = ΔP/P + ΔQ/Q. If velocity is assumed constant and real output is fixed at a full-employment level, then ΔV/V = 0 and ΔQ/Q = 0. Moreover, ΔM/M = ΔP/P. Thus, any changes in the money supply will be reflected in proportional changes in the price level. This is the major result of the crude classical quantity theory of money: in equilibrium, the price level is exactly proportional to the money supply.
Another conclusion is that real output (or any other “real” economic behavior) is unaffected in the long run by either the money supply or the price level. These early versions of the quantity theory of money are clearly misnamed—they should be called monetary theories of the price level.
Classical theorists concluded by saying "Money is a veil." By this they meant that money, inflation, or deflation may temporarily disguise the real world, but in the long run, money affects only the price level and has virtually no effect on such real variables as production, employment, labor force participation, unemployment, or relative prices. Even though classical theorists vehemently opposed large expansions of the money supply for fear that inflation would temporarily distort behavior, it is probably fair to say that classical monetary theory leads to the conclusion that, in the long run, "money does not matter." It does not affect production, consumption, investment, or any other "real" economic behavior. When we deal graphically with the demand and supply of money in later sections, we will resurrect these classical propositions to see how modern monetary theory treats them.
The definition of a solid appears obvious; a solid is generally thought of as being hard and firm. Upon inspection, however, the definition becomes less straightforward. A cube of butter, for example, is hard after being stored in a refrigerator and is clearly a solid. After remaining on the kitchen counter for a day, the same cube becomes quite soft, and it is unclear if the butter should still be considered a solid. Many crystals behave like butter in that they are hard at low temperatures but soft at higher temperatures. They are called solids at all temperatures below their melting point. A possible definition of a solid is an object that retains its shape if left undisturbed. The pertinent issue is how long the object keeps its shape. A highly viscous fluid retains its shape for an hour but not a year. A solid must keep its shape longer than that.
Basic units of solids
The basic units of solids are either atoms or atoms that have combined into molecules. The electrons of an atom move in orbits that form a shell structure around the nucleus. The shells are filled in a systematic order, with each shell accommodating only a small number of electrons. Different atoms have different numbers of electrons, which are distributed in a characteristic electronic structure of filled and partially filled shells. The arrangement of an atom’s electrons determines its chemical properties. The properties of solids are usually predictable from the properties of their constituent atoms and molecules, and the different shell structures of atoms are therefore responsible for the diversity of solids.
All occupied shells of the argon (Ar) atom, for example, are filled, resulting in a spherical atomic shape. In solid argon the atoms are arranged according to the closest packing of these spheres. The iron (Fe) atom, in contrast, has one electron shell that is only partially filled, giving the atom a net magnetic moment. Thus, crystalline iron is a magnet. The covalent bond between two carbon (C) atoms is the strongest bond found in nature. This strong bond is responsible for making diamond the hardest solid.
Long- and short-range order
A solid is crystalline if it has long-range order. Once the positions of an atom and its neighbours are known at one point, the place of each atom is known precisely throughout the crystal. Most liquids lack long-range order, although many have short-range order. Short range is defined as the first- or second-nearest neighbours of an atom. In many liquids the first-neighbour atoms are arranged in the same structure as in the corresponding solid phase. At distances that are many atoms away, however, the positions of the atoms become uncorrelated. These fluids, such as water, have short-range order but lack long-range order. Certain liquids may have short-range order in one direction and long-range order in another direction; these special substances are called liquid crystals. Solid crystals have both short-range order and long-range order.
Solids that have short-range order but lack long-range order are called amorphous. Almost any material can be made amorphous by rapid solidification from the melt (molten state). This condition is unstable, and the solid will crystallize in time. If the timescale for crystallization is years, then the amorphous state appears stable. Glasses are an example of amorphous solids. In crystalline silicon (Si) each atom is tetrahedrally bonded to four neighbours. In amorphous silicon (a-Si) the same short-range order exists, but the bond directions become changed at distances farther away from any atom. Amorphous silicon is a type of glass. Quasicrystals are another type of solid that lack long-range order.
Most solid materials found in nature exist in polycrystalline form rather than as a single crystal. They are actually composed of millions of grains (small crystals) packed together to fill all space. Each individual grain has a different orientation than its neighbours. Although long-range order exists within one grain, at the boundary between grains, the ordering changes direction. A typical piece of iron or copper (Cu) is polycrystalline. Single crystals of metals are soft and malleable, while polycrystalline metals are harder and stronger and are more useful industrially. Most polycrystalline materials can be made into large single crystals after extended heat treatment. In the past blacksmiths would heat a piece of metal to make it malleable: heat makes a few grains grow large by incorporating smaller ones. The smiths would bend the softened metal into shape and then pound it awhile; the pounding would make it polycrystalline again, increasing its strength.
Categories of crystals
Crystals are classified in general categories, such as insulators, metals, semiconductors, and molecular solids. A single crystal of an insulator is usually transparent and resembles a piece of glass. Metals are shiny unless they have rusted. Semiconductors are sometimes shiny and sometimes transparent but are never rusty. Many crystals can be classified as a single type of solid, while others have intermediate behaviour. Cadmium sulfide (CdS) can be prepared in pure form and is an excellent insulator; when impurities are added to cadmium sulfide, it becomes an interesting semiconductor. Bismuth (Bi) appears to be a metal, but the number of electrons available for electrical conduction is similar to that of semiconductors. In fact, bismuth is called a semimetal. Molecular solids are usually crystals formed from molecules or polymers. They can be insulating, semiconducting, or metallic, depending on the type of molecules in the crystal. New molecules are continuously being synthesized, and many are made into crystals. The number of different crystals is enormous.
Crystals can be grown under moderate conditions from all 92 naturally occurring elements except helium, and helium can be crystallized at low temperatures by using 25 atmospheres of pressure. Binary crystals are composed of two elements. There are thousands of binary crystals; some examples are sodium chloride (NaCl), alumina (Al2O3), and ice (H2O). Crystals can also be formed with three or more elements.
The unit cell
A basic concept in crystal structures is the unit cell. It is the smallest unit of volume that permits identical cells to be stacked together to fill all space. By repeating the pattern of the unit cell over and over in all directions, the entire crystal lattice can be constructed. A cube is the simplest example of a unit cell. Two other examples are shown in Figure 1. The first is the unit cell for a face-centred cubic lattice, and the second is for a body-centred cubic lattice. These structures are explained in the following paragraphs. There are only a few different unit-cell shapes, so many different crystals share a single unit-cell type. An important characteristic of a unit cell is the number of atoms it contains. The total number of atoms in the entire crystal is the number in each cell multiplied by the number of unit cells. Copper and aluminum (Al) each have one atom per unit cell, while zinc (Zn) and sodium chloride have two. Most crystals have only a few atoms per unit cell, but there are some exceptions. Crystals of polymers, for example, have thousands of atoms in each unit cell.
Structures of metals
The elements are found in a variety of crystal packing arrangements. The most common lattice structures for metals are those obtained by stacking the atomic spheres into the most compact arrangement. There are two such possible periodic arrangements. In each, the first layer has the atoms packed into a plane-triangular lattice in which every atom has six immediate neighbours. Figure 2 shows this arrangement for the atoms labeled A. The second layer is shaded in the figure. It has the same plane-triangular structure; the atoms sit in the holes formed by the first layer. The first layer has two equivalent sets of holes, but the atoms of the second layer can occupy only one set. The third layer, labeled C, has the same structure, but there are two choices for selecting the holes that the atoms will occupy. The third layer can be placed over the atoms of the first layer, generating an alternate layer sequence ABABAB . . ., which is called the hexagonal-closest-packed (hcp) structure. Cadmium and zinc crystallize with this structure. The second possibility is to place the atoms of the third layer over those of neither of the first two but instead over the set of holes in the first layer that remains unoccupied. The fourth layer is placed over the first, and so there is a three-layer repetition ABCABCABC . . ., which is called the face-centred cubic (fcc), or cubic-closest-packed, lattice. Copper, silver (Ag), and gold (Au) crystallize in fcc lattices. In the hcp and the fcc structures the spheres fill 74 percent of the volume, which represents the closest possible packing of spheres. Each atom has 12 neighbours. The number of atoms in a unit cell is two for hcp structures and one for fcc. There are 32 metals that have the hcp lattice and 26 with the fcc. Another possible arrangement is the body-centred cubic (bcc) lattice, in which each atom has eight neighbours arranged at the corners of a cube. Figure 3A shows the cesium chloride (CsCl) structure, which is a cubic arrangement. If all atoms in this structure are of the same species, it is a bcc lattice. The spheres occupy 68 percent of the volume. There are 23 metals with the bcc arrangement. The sum of these three numbers (32 + 26 + 23) exceeds the number of elements that form metals (63), since some elements are found in two or three of these structures.
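The 74 percent and 68 percent figures quoted above follow directly from the geometry of touching spheres. Here is a minimal sketch of the calculation, using the conventional cubic cells (four atoms per fcc cube, two per bcc cube, rather than the primitive cells counted in the text) with lengths expressed in units of the sphere radius:

```python
import math

def packing_fraction(atoms_per_cell, cell_edge_in_radii):
    """Fraction of a cubic cell's volume filled by touching spheres of radius 1."""
    sphere_volume = atoms_per_cell * (4.0 / 3.0) * math.pi  # total sphere volume, r = 1
    cell_volume = cell_edge_in_radii ** 3
    return sphere_volume / cell_volume

# fcc: spheres touch along a face diagonal, so the cube edge a = 2*sqrt(2)*r
print(f"fcc: {packing_fraction(4, 2 * math.sqrt(2)):.1%}")  # ~74.0%
# bcc: spheres touch along the body diagonal, so a = 4*r/sqrt(3)
print(f"bcc: {packing_fraction(2, 4 / math.sqrt(3)):.1%}")  # ~68.0%
```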
The fcc structure is also found for crystals of the rare gas solids neon (Ne), argon (Ar), krypton (Kr), and xenon (Xe). Their melting temperatures at atmospheric pressure are: Ne, 24.6 K; Ar, 83.8 K; Kr, 115.8 K; and Xe, 161.4 K.
Structures of nonmetallic elements
The elements in the fourth column of the periodic table—carbon, silicon, germanium (Ge), and α-tin (α-Sn)—prefer covalent bonding. Carbon has several possible crystal structures. Each atom in the covalent bond has four first-neighbours, which are at the corners of a tetrahedron. This arrangement is called the diamond lattice and is shown in Figure 3C. There are two atoms in a unit cell, which is fcc. Large crystals of diamond are valuable gemstones. The crystal has other interesting properties; it has the highest sound velocity of any solid and is the best conductor of heat. Besides diamond, the other common form of carbon is graphite, which is a layered material. Each carbon atom has three coplanar near neighbours, forming an arrangement called the honeycomb lattice. Three-dimensional graphite crystals are obtained by stacking similar layers.
Another form of crystalline carbon is based on a molecule with 60 carbon atoms called buckminsterfullerene (C60). The molecular shape is spherical. Each carbon is bonded to three neighbours, as in graphite, and the spherical shape is achieved by a mixture of 12 rings with five sides and 20 rings with six sides. Similar structures were first visualized by the American architect R. Buckminster Fuller for geodesic domes. The C60 molecules, also called buckyballs, are quite strong and almost incompressible. Crystals are formed such that the balls are arranged in an fcc lattice with a one-nanometre spacing between the centres of adjacent balls. The similar C70 molecule has the shape of a rugby ball; C70 molecules also form an fcc crystal when stacked together. The solid fullerenes form molecular crystals, with weak binding—provided by van der Waals interactions—between the molecules.
Many elements form diatomic gases: hydrogen (H), oxygen (O), nitrogen (N), fluorine (F), chlorine (Cl), bromine (Br), and iodine (I). When cooled to low temperature, they form solids of diatomic molecules. Nitrogen has the hcp structure, while oxygen has a more complex structure.
The most interesting crystal structures are those of elements that are neither metallic, covalent, nor diatomic. Although boron (B) and sulfur (S) have several different crystal structures, each has one arrangement in which it is usually found. Twelve boron atoms form a molecule in the shape of an icosahedron (Figure 4). Crystals are formed by stacking the molecules. The β-rhombohedral structure of boron has seven of these icosahedral molecules in each unit cell, giving a total of 84 atoms. Molecules of sulfur are usually arranged in rings; the most common ring has eight atoms. The typical structure is α-sulfur, which has 16 molecules per unit cell, or 128 atoms. In the common crystals of selenium (Se) and tellurium (Te), the atoms are arranged in helical chains, which stack like cordwood. However, selenium also makes eight-atom rings, similar to sulfur, and forms crystals from them. Sulfur also makes helical chains, similar to selenium, and stacks them together into crystals.
Structures of binary crystals
Binary crystals are found in many structures. Some pairs of elements form more than one structure. At room temperature, cadmium sulfide may crystallize either in the zinc blende or wurtzite structure. Alumina also has two possible structures at room temperature, α-alumina (corundum) and β-alumina. Other binary crystals exhibit different structures at different temperatures. Among the most complex crystals are those of silicon dioxide (SiO2), which has seven different structures at various temperatures and pressures; the most common of these structures is quartz. Some pairs of elements form several different crystals in which the ions have different chemical valences. Cadmium (Cd) and phosphorus (P) form the crystals Cd3P2, CdP2, CdP4, Cd7P10, and Cd6P7. Only in the first case are the ions assigned the expected chemical valences of Cd2+ and P3-.
Among the binary crystals, the easiest structures to visualize are those with equal numbers of the two types of atoms. The structure of sodium chloride is based on a cube. To construct the lattice, the sodium and chlorine atoms are placed on alternate corners of a cube, and the structure is repeated (Figure 3B). The structure of the sodium atoms alone, or the chlorine atoms alone, is fcc and defines the unit cell. The sodium chloride structure thus is made up of two interpenetrating fcc lattices. The cesium chloride lattice (Figure 3A) is based on the bcc structure; every other atom is cesium or chlorine. In this case, the unit cell is a cube. The third important structure for AB (binary) lattices is zinc blende (Figure 3D). It is based on the diamond structure, where every other atom is A or B. Many binary semiconductors have this structure, including those with one atom from the third (boron, aluminum, gallium [Ga], or indium [In]) and one from the fifth (nitrogen, phosphorus, arsenic [As], or antimony [Sb]) column of the periodic table (GaAs, InP, etc.). Most of the chalcogenides (O, S, Se, Te) of cadmium and zinc (CdTe, ZnSe, ZnTe, etc.) also have the zinc blende structure. The mineral zinc blende is ZnS; its unit cell is also fcc. The wurtzite structure is based on the hcp lattice, where every other atom is A or B. These four structures comprise most of the binary crystals with equal numbers of cations and anions.
The fullerene molecule forms binary crystals MxC60 with alkali atoms, where M is potassium (K), rubidium (Rb), or cesium (Cs). The fullerene molecules retain their spherical shape, and the alkali atoms sit between them. The subscript x can take on several values. A compound with x = 6 (e.g., K6C60) is an insulator with the fullerenes in a bcc structure. The case x = 4 is an insulator with the body-centred tetragonal structure, while the case x = 3 is a metal with the fullerenes in an fcc structure. K3C60, Rb3C60, and Cs3C60 are superconductors at low temperatures.
Alloys are solid mixtures of atoms with metallic properties. The definition includes both amorphous and crystalline solids. Although many pairs of elements will mix together as solids, many pairs will not. Almost all chemical entities can be mixed in liquid form. But cooling a liquid to form a solid often results in phase separation; a polycrystalline material is obtained in which each grain is purely one atom or the other. Extremely rapid cooling can produce an amorphous alloy. Some pairs of elements form alloys that are metallic crystals. They have useful properties that differ from those exhibited by the pure elements. For example, alloying makes a metal stronger; for this reason alloys of gold, rather than the pure metal, are used in jewelry.
Atoms tend to form crystalline alloys when they are of similar size. The sizes of atoms are not easy to define, however, because atoms are not rigid objects with sharp boundaries. The outer part of an atom is composed of electrons in bound orbits; the average number of electrons decreases gradually with increasing distance from the nucleus. There is no point that can be assigned as the precise radius of the atom. Scientists have discovered, however, that each atom in a solid has a characteristic radius that determines its preferred separation from neighbouring atoms. For most types of atom this radius is constant, even in different solids. An empirical radius is assigned to each atom for bonding considerations, which leads to the concept of atomic size. Atoms readily make crystalline alloys when the radii of the two types of atoms agree to within roughly 15 percent.
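As a rough sketch of that size rule in code (the radii below are approximate metallic radii in angstroms, supplied only for illustration):

```python
def likely_to_alloy(radius_a, radius_b, tolerance=0.15):
    """Rough size check based on the ~15 percent rule described above."""
    return abs(radius_a - radius_b) / max(radius_a, radius_b) <= tolerance

# Approximate metallic radii in angstroms (illustrative values, not exact).
copper, zinc, cesium = 1.28, 1.34, 2.65

print(likely_to_alloy(copper, zinc))    # True: copper and zinc readily form brass
print(likely_to_alloy(copper, cesium))  # False: the size mismatch is far too large
```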
Two kinds of ordering are found in crystalline alloys. Most alloys at low temperature are binary crystals with perfect ordering. An example is the alloy of copper and zinc. Copper is fcc, whereas zinc is hcp. A 50-percent-zinc–50-percent-copper alloy has a different structure—β-brass. At low temperatures it has the cesium chloride structure: a bcc lattice with alternating atoms of copper and zinc and a cubic unit cell. If the temperature is raised above 470° C, however, a phase transition to another crystalline state occurs. The ordering at high temperature is also bcc, but now each site has equal probability of having a copper or zinc atom. The two types of atoms randomly occupy each site, but there is still long-range order. At all temperatures, thousands of atoms away from a site, the location of the atom site can be predicted with certainty. At temperatures below 470° C one also knows whether that site will be occupied by a copper or zinc atom, while above 470° C there is an equal likelihood of finding either atom. The high-temperature phase is crystalline but disordered. The disorder phase is obtained through a partial melting, not into a liquid state but into a less ordered one. This behaviour is typical of metal alloys. Other common alloys are steel, an alloy of iron and carbon; stainless steel, an alloy of iron, nickel (Ni), and chromium (Cr); pewter and solder, alloys of tin and lead (Pb); and britannia metal, an alloy of tin, antimony, and copper.
A crystal is never perfect; a variety of imperfections can mar the ordering. A defect is a small imperfection affecting a few atoms. The simplest type of defect is a missing atom and is called a vacancy. Since all atoms occupy space, extra atoms cannot be located at the lattice sites of other atoms, but they can be found between them; such atoms are called interstitials. Thermal vibrations may cause an atom to leave its original crystal site and move into a nearby interstitial site, creating a vacancy-interstitial pair. Vacancies and interstitials are the types of defects found in a pure crystal. In another defect, called an impurity, an atom is present that is different from the host crystal atoms. Impurities may either occupy interstitial spaces or substitute for a host atom in its lattice site.
There is no sharp distinction between an alloy and a crystal with many impurities. An alloy results when a sufficient number of impurities are added that are soluble in the host metal. However, most elements are not soluble in most crystals. Crystals generally can tolerate a few impurities per million host atoms. If too many impurities of the insoluble variety are added, they coalesce to form their own small crystallite. These inclusions are called precipitates and constitute a large defect.
Germanium is a common impurity in silicon. It prefers the same tetrahedral bonding as silicon and readily substitutes for silicon atoms. Similarly, silicon is a common impurity in germanium. No large crystal can be made without impurities; the purest large crystal ever grown was made of germanium. It had about 10^10 impurities in each cubic centimetre of material, which is less than one impurity for each trillion atoms.
Impurities often make crystals more useful. In the absence of impurities, α-alumina is colourless. Iron and titanium impurities impart to it a blue colour, and the resulting gem-quality mineral is known as sapphire. Chromium impurities are responsible for the red colour characteristic of rubies, the other gem of α-alumina. Pure semiconductors rarely conduct electricity well at room temperatures. Their ability to conduct electricity is caused by impurities. Such impurities are deliberately added to silicon in the manufacture of integrated circuits. In fluorescent lamps the visible light is emitted by impurities in the phosphors (luminescent materials).
Other imperfections in crystals involve many atoms. Twinning is a special type of grain boundary defect, in which a crystal is joined to its mirror image. Another kind of imperfection is a dislocation, which is a line defect that may run the length of the crystal. One of the many types of dislocations is due to an extra plane of atoms that is inserted somewhere in the crystal structure. Another type, called an edge dislocation, is shown in Figure 5. This line defect occurs when there is a missing row of atoms. In the figure the crystal arrangement is perfect on the top and on the bottom. The defect is the row of atoms missing from region b. This mistake runs in a line that is perpendicular to the page and places a strain on region a.
Dislocations are formed when a crystal is grown, and great care must be taken to produce a crystal free of them. Dislocations are stable and will exist for years. They relieve mechanical stress. If one presses on a crystal, it will accommodate the induced stress by growing dislocations at the surface, which gradually move inward. Dislocations make a crystal mechanically harder. When a metal bar is cold-worked by rolling or hammering, dislocations and grain boundaries are introduced; this causes the hardening.
Determination of crystal structures
Crystal structures are determined by scattering experiments using a portion of the crystal as the target. A beam of particles is sent toward the target, and upon impact some of the particles scatter from the crystal and ricochet in various directions. A measurement of the scattered particles provides raw data, which is then computer-processed to give a picture of the atomic arrangements. The positions are then inferred from the computer-analyzed data.
Max von Laue first suggested in 1912 that this measurement could be done using X rays, which are electromagnetic radiation of very high frequency. High frequencies are needed because these waves have a short wavelength. Von Laue realized that atoms have a spacing of only a few angstroms (1 angstrom [Å] is 10−10 metre, or 3.94 × 10−9 inch). In order to measure atomic arrangements, the particles scattering from the target must also have a wavelength of a few angstroms. X rays are required when the beam consists of electromagnetic radiation. The X rays only scatter in certain directions, and there are many X rays associated with each direction. The scattered particles appear in spots corresponding to locations where the scattering from each identical atom produces an outgoing wave that has all the wavelengths in phase. Figure 6 shows incoming waves in phase. The scattering from atom A2 has a longer path than that from atom A1. If this additional path has a length (AB + BC) that is an exact multiple of the wavelength, then the two outgoing waves are in phase and reinforce each other. If the scattering angle is changed slightly, the waves no longer add coherently and begin to cancel one another. Combining the scattered radiation from all the atoms in the crystal causes all the outgoing waves to add coherently in certain directions and produce a strong signal in the scattered wave. If the extra path length (AB + BC) is five wavelengths, for example, the spot appears in one place. If it is six wavelengths, the spot is elsewhere. Thus, the different spots correspond to the different multiples of the wavelength of the X ray. The measurement produces two types of information: the directions of the spots and their intensity. This information is insufficient to deduce the exact crystal structure, however, as there is no algorithm by which the computer can go directly from the data to the structure. The crystallographer must propose various structures and compute how they would scatter the X rays. The theoretical results are compared with the measured one, and the theoretical arrangement is chosen that best fits the data. Although this procedure is fast when there are only a few atoms in a unit cell, it may take months or years for complex structures. Some protein molecules, for instance, have hundreds of atoms. Crystals of the proteins are grown, and X rays are used to measure the structure. The goal is to determine how the atoms are arranged in the protein, rather than how the proteins are arranged in the crystal.
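The in-phase condition described here is usually written as the Bragg relation 2d sin θ = nλ, where d is the spacing between parallel atomic planes. A minimal sketch of how the allowed spot directions follow from it (the plane spacing and X-ray wavelength are assumed values, chosen only for illustration):

```python
import math

def scattering_angles(spacing, wavelength):
    """Angles theta (degrees) at which 2*d*sin(theta) is a whole number of
    wavelengths, so waves scattered from successive planes add in phase."""
    angles = []
    n = 1
    while n * wavelength <= 2 * spacing:
        theta = math.degrees(math.asin(n * wavelength / (2 * spacing)))
        angles.append((n, round(theta, 1)))
        n += 1
    return angles

# Assumed example: 2.8-angstrom plane spacing, 1.54-angstrom X rays.
print(scattering_angles(2.8, 1.54))  # [(1, 16.0), (2, 33.4), (3, 55.6)]
```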
Beams of neutrons may also be used to measure crystal structure. The beam of neutrons is obtained by drilling a hole in the side of a nuclear reactor. The energetic neutrons created in nuclear fission escape through the hole. The motion of elementary particles is governed by quantum, or wave, mechanics. Each neutron has a wavelength that depends on its momentum. The scattering directions are determined by the wavelength, as is the case with X rays. The wavelengths for neutrons from a reactor are suitable for measuring crystal structures.
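The wavelength–momentum link mentioned above is the de Broglie relation λ = h/p. A quick sketch for a thermal neutron, using an assumed kinetic energy of about 0.025 eV (a typical room-temperature value):

```python
# de Broglie wavelength of a thermal neutron
h = 6.626e-34               # Planck constant, J*s
m_n = 1.675e-27             # neutron mass, kg
energy = 0.025 * 1.602e-19  # ~0.025 eV kinetic energy, converted to joules

momentum = (2 * m_n * energy) ** 0.5  # p = sqrt(2mE) for a non-relativistic neutron
wavelength = h / momentum             # de Broglie relation
print(f"{wavelength * 1e10:.2f} angstroms")  # ~1.8 angstroms, comparable to atomic spacings
```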
X rays and neutrons provide the basis for two competing technologies in crystallography. Although they are similar in principle, the two methods have some differences. X rays scatter from the electrons in the atoms so that more electrons result in more scattering. X rays easily detect atoms of high atomic number, which have many electrons, but cannot readily locate atoms with few electrons. In hydrogen-bonded crystals, X rays do not detect the protons at all. Neutrons, on the other hand, scatter from the atomic nucleus. They scatter readily from protons and are excellent for determining the structure of hydrogen-bonded solids. One drawback to this method is that some nuclei absorb neutrons completely, and there is little scattering from these targets.
Beams of electrons can also be used to measure crystal structure, because energetic electrons have a wavelength that is suitable for such measurements. The problem with electrons is that they scatter strongly from atoms. Proper interpretation of the experimental results requires that an electron scatter only from one atom and leave the crystal without scattering again. Low-energy electrons scatter many times, and the interpretation must reflect this. Low-energy electron diffraction (LEED) is a technique in which a beam of electrons is directed toward the surface. The scattered electrons that reflect backward from the surface are measured. They scatter many times before leaving backward but mainly leave in a few directions that appear as “spots” in the measurements. An analysis of the varied spots gives information on the crystalline arrangement. Because the electrons are scattered strongly by the atoms in the first few layers of the surface, the measurement gives only the arrangements of atoms in these layers. It is assumed that the same structure is repeated throughout the crystal. Another scattering experiment involves electrons of extremely high energy. The scattering rate decreases as the energy of the electron increases, so that very energetic electrons usually scatter only once. Various electron microscopes are constructed on this principle.
|
Students who want to find out things as scientists do will want to conduct a hands-on investigation. While scientists study a whole area of science, each investigation is focused on learning just one thing at a time. This is essential if the results are to be trusted by the entire science community.
Follow the Scientific Steps below to complete your scientific process for your chosen investigation: How does temperature impact the activity of ants?
What do scientists think they already know about the topic? What are the processes involved and how do they work? Background research can be gathered first hand from primary sources such as interviews with a teacher, scientist at a local university, or other person with specialized knowledge. Or use secondary sources such as books, magazines, journals, newspapers, online documents, or literature from non-profit organizations. Don’t forget to make a record of any resource used so that credit can be given in a bibliography.
After gathering background research, the next step is to formulate a hypothesis. More than a random guess, a hypothesis is a testable statement based on background knowledge, research, or scientific reason. A hypothesis states the anticipated cause and effect that may be observed during the investigation.
Consider the following hypothesis:
If ice is placed in a Styrofoam container, it will take longer to melt than if placed in a plastic or glass container. I think this is true because my research shows that a lot of people purchase Styrofoam coolers to keep drinks cool.
The time it takes for ice to melt (dependent variable) depends on the type of container used (independent variable). A hypothesis shows the relationship among variables in the investigation and often (but not always) uses the words if and then.
- Design Experiment
Once a hypothesis has been formulated, it is time to design a procedure to test it. A well-designed investigation contains procedures that take into account all of the factors that could impact the results of the investigation. These factors are called variables.
There are three types of variables to consider when designing the investigation procedure.
- The independent variable is the one variable the investigator chooses to change.
- Controlled variables are variables that are kept the same each time.
- The dependent variable is the variable that changes as a result of, or in response to, the independent variable.
Step A – Clarify Variables
Clarify the variables involved in the investigation by developing a table such as the one below.
- Testable Question: What detergent removes stains the best?
- What is changed (independent variable): Type of detergent, type of stain
- What stays the same: Type of cloth, physical process of stain removal
- Data Collected (dependent variable): Stain fading over time for combinations of detergents and stains
Step B – List Materials
Make a list of materials that will be used in the investigation.
Step C – List Steps
List the steps needed to carry out the investigation.
Step D – Estimate Time
Estimate the time it will take to complete the investigation. Will the data be gathered in one sitting or over the course of several weeks?
Step E – Check Work
Check the work. Ask someone else to read the procedure to make sure the steps are clear. Are there any steps missing? Double check the materials list to be sure all of the necessary materials are included.
- Data Collection
After designing the experiment and gathering the materials, it is time to set up and to carry out the investigation.
When setting up the investigation, consider...
The Location: Choose a low traffic area to reduce the risk of someone accidentally tampering with the investigation results—especially if the investigation lasts for several weeks.
Safety:
Avoid harmful accidents by using safe practices.
- The use of construction tools or potentially harmful chemicals will require adult supervision.
- Locate the nearest sink or fire extinguisher as a safety precaution.
- Determine how to dispose of materials. For example, some chemicals should not be mixed together or put down a sink drain.
- Wear protective clothing such as goggles and gloves. Tie back loose hair so that it does not get caught on any of the equipment.
Documentation: Making a rough sketch or recording notes of the investigation setup is helpful if the experiment is to be repeated in the future.
Carrying out the investigation involves data collection. There are two types of data that may be collected—quantitative data and qualitative data.
Quantitative data:
- Uses numbers to describe the amount of something.
- Involves tools such as rulers, timers, graduated cylinders, etc.
- Uses standard metric units (for instance, meters and centimeters for length, grams for mass, and degrees Celsius for temperature).
- May involve the use of a scale such as in the example below.
Qualitative data:
- Uses words to describe the data.
- Describes physical properties such as how something looks, feels, smells, tastes, or sounds.
As data is collected it can be organized into lists and tables. Organizing data will be helpful for identifying relationships later when making an analysis. Using technology, such as spreadsheets, to organize the data can make it easily accessible to add to and edit.
- Analyze Data
After data has been collected, the next step is to analyze it. The goal of data analysis is to determine if there is a relationship between the independent and dependent variables. In student terms, this is called “looking for patterns in the data.” Did the change I made have an effect that can be measured?
Recording data on a table or chart makes it much easier to observe relationships and trends. There are many observations that can be made when looking at a data table. Comparing the mean or median number of objects, observing trends of increasing or decreasing numbers, and comparing modes (the values that occur most frequently) are just a few examples of quantitative analysis.
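For example, Python’s built-in statistics module can compute these summary values directly; the measurements below are made up purely for illustration:

```python
import statistics

# Hypothetical measurements: minutes for a stain to fade with one detergent.
fade_times = [42, 37, 45, 42, 39, 44, 42]

print("mean:  ", round(statistics.mean(fade_times), 1))  # 41.6
print("median:", statistics.median(fade_times))          # 42
print("mode:  ", statistics.mode(fade_times))            # 42 (occurs most often)
```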
Besides analyzing data on tables or charts, graphs can be used to make a picture of the data. Graphing the data can often help make those relationships and trends easier to see. Graphs are called “pictures of data.” The important thing is that appropriate graphs are selected for the type of data. For example, bar graphs, pictographs, or circle graphs should be used to represent categorical data (sometimes called “side by side” data). Line plots are used to show numerical data. Line graphs should be used to show how data changes over time. Graphs can be drawn by hand using graph paper or generated on the computer from spreadsheets for students who are technically able.
These questions can help with analyzing data:
- What can be learned from looking at the data?
- How does the data relate to the student’s original hypothesis?
- Did what you changed (independent variable) cause changes in the results (dependent variable)?
- Draw Conclusions
After analyzing the data, the next step is to draw conclusions. Do not change the hypothesis if it does not match the findings. The accuracy of a hypothesis is NOT what constitutes a successful science fair investigation. Rather, Science Fair judges will want to see that the conclusions stated match the data that was collected.
Application of the Results: Students may want to include an application as part of their conclusion. For example, after investigating the effectiveness of different stain removers, a student might conclude that vinegar is just as effective at removing stains as are some commercial stain removers. As a result, the student might recommend that people use vinegar as a stain remover since it may be the more eco-friendly product.
In short, conclusions are written to answer the original testable question proposed at the beginning of the investigation. They also explain how the student used science process to develop an accurate answer.
|
Site: Stonehenge, Avebury, and Associated Sites
Location: United Kingdom of Great Britain and Northern Ireland
Year Designated: 1986
Reason for Designation: These prehistoric stone circles are a striking reminder of the architectural, engineering, social, and spiritual sophistication of Britain’s Neolithic peoples.
* * *
The ancient megaliths of Stonehenge and Avebury have stood mute for thousands of years but they speak clearly to the engineering skills and mysterious ritual beliefs of those who built them.
The Neolithic and Bronze Age peoples who inhabited the chalklands of Southern Britain built bank-and-ditch “henge” complexes, improved with stone and arrayed with an eye to celestial schedules, with considerable effort for purposes still largely unknown to us.
Stonehenge began about 3000 B.C. as a circular earthen bank and adjacent ditch. It was improved over thousands of years with timber and later (circa 2600 to 1600 B.C.) with stone.
The circle utilizes blocks weighing over 45 tons and towering up to 24 feet (7.3 meters) high. Some were moved 150 miles (240 kilometers) from the Preseli Hills in Wales, and only a sophisticated society could have accomplished such a feat.
The engineering and architectural skills employed in the actual construction of Stonehenge are mirrored by the ceremonial sophistication evident in the greater site’s overall design. Famously, for example, the Stone Circle and Avenue leading 1.8 miles (3 kilometers) from Stonehenge to the River Avon is built on the axis of the midsummer sunrise. Whether this alignment was constructed for sun worship, calendar keeping, or other purposes remains a matter of much debate.
The Stonehenge World Heritage site includes far more than the iconic circle. It sprawls over 10 square miles (26.6 square kilometers) and includes avenues, settlements, some 350 burial grounds, possible healing centers, and other sites all skillfully designed to blend into a larger pattern of landscape design.
Stonehenge may be the world’s most famous prehistoric megalithic monument but its famed circle of stones is not the largest. That distinction belongs to nearby Avebury, which is also home to the biggest prehistoric mound in Europe—Silbury Hill. The 130-foot-high (39.5-meter-high) mound is made up of half a million tons of chalk that was piled up around 2400 B.C.
Avebury’s circle consists of an enormous henge, an ancient earthwork embankment, and an adjacent ditch with a circumference of about half a mile that was cut by impressive causeways.
The site retains a scattering of massive stones that once formed a great stone circle inside the henge, as well as remnant interior circles and monuments including avenues of paired, pillar-like stones. Sadly, many of the site’s stones were destroyed by residents of Avebury itself, which lies inside the henge. In an attack on pagan monuments, villagers began to topple and bury the ancient stones as early as the 14th century. In the 18th century a far more efficient process saw them systematically broken up and destroyed.
Archaeologist Alexander Keiller set some of this right by excavating and re-erecting many stones in the 1930s. He founded an archaeological museum on the site.
How to Get There
Stonehenge is located about 10 miles (16 kilometers) from Salisbury. It’s accessible by road or via public tour buses, which depart from Salisbury’s rail and bus stations. Avebury is about 25 miles from Bath and 11 from Swindon—from which bus service is available. Trains also service Swindon and Pewsey (10 miles from Avebury).
How to Visit
A walkway surrounds Stonehenge’s famed circle, but due to conservation concerns the public is no longer generally let inside the ring. However, many compensations await. The iconic circle is surrounded by a vast expanse of fields and country cover, perfect walking country dotted with associated earthworks, burial grounds, and other monuments. At Avebury it’s possible to walk a half-mile circuit along the ancient earthwork henge and wander among the stones at will.
When to Visit
Stonehenge and Avebury are open year round. In fact, Avebury’s intimate stones remain open to investigation by history-loving visitors at any time. Stonehenge keeps more regular hours, but there is nothing at all regular about summer solstice observations at the site. Tens of thousands of hippies, Druids, and thrill-seekers of all stripes flock to the site for a raucous annual festival.
|
China has become the first country to successfully land a spacecraft on the far side of the moon. The Chang’e-4 probe has also made the first lunar landing since 1972. It has the task of exploring the side of the moon that never faces earth.
The Chinese probe landed in a huge crater 2500 km in diameter and 13 km deep. The crater is one of the oldest parts of the moon and our solar system.
Scientists hope to learn more about the geology of the far side of the moon. The craft has two cameras on board which will send images back to earth. It will also attempt to send signals to distant regions of space, something that cannot happen on earth because of too much radio noise.
Chang’e-4 also has instruments on board to examine minerals as well as a container with seeds which will try to create a miniature biosphere.
Communication with the spacecraft is not easy. Images and other data must be transmitted to a separate satellite because no direct communication with the earth is possible.
For China the Chang’e-4 mission is an important achievement, because the country has successfully done something no other nation on earth has. It wants to become a leading power in space exploration and has announced plans to send astronauts to the moon and set up its own space station.
The far side of the moon is older and has a thicker crust than the visible side. It takes the moon as long to rotate on its own axis as it does for one complete orbit around the Earth.
achievement = something important that you have done
announce = to make public; to say officially
attempt = try
axis = imaginary line around which an object turns
biosphere = area in which plants and animals can live
crater = round hole in the ground made by a huge rock that crashed into it
crust = hard top part of a planet
data = information
distant = faraway
explore = to go to unknown places and find out more about them
face = look towards
far side = the side of the moon which we never see
geology = study of the rocks that make up the moon
image = picture
lunar = about the moon
mineral = material that is formed naturally and can be dug out of the ground
miniature = very small
probe = unmanned spacecraft
radio noise = unwanted electric signals
rotate = move around a central point
scientist = person who is trained in science and works in a lab
seed = small hard objects produced by plants from which a new plant is formed
solar system= our sun and the planets that move around it
space exploration = to send spacecraft into other regions of space in order to find out more about them
space station = large spacecraft that stays above the earth and is the basis for people travelling in space or for tests and experiments
The international airport at Kochi in southern India is the first airport in the world to rely solely on solar energy. Last year it won a top environmental award sponsored by the United Nations.
Five years ago airport authorities started looking for new ways to save energy. At first, they put solar panels on the top of one of the passenger terminals. The initial costs were huge, but as time went on solar panels became cheaper. The airport is expected to get back its invested money within the next six years.
Today, over 40,000 solar panels, placed on wide areas of unused land, produce enough energy not only for the airport but for large parts of the city itself. Currently, more than 29 megawatts are produced and output will rise to 40 megawatts, enough to meet the rising energy demands of the city. In addition, the solar panels absorb as much carbon as the planting of 3 million trees.
Kochi’s solar-powered airport is only one of India’s projects to increase the use of solar energy and reduce carbon emissions. By 2022 India is expected to increase its solar capacity to 100,000 megawatts.
The project has received attention from several environmental organisations. Especially African countries are considering moving more of their energy production towards solar power.
absorb = here: to take in; to prevent from escaping into the atmosphere
attention = to get interested
authorities = people or organisations that control or are in charge of an area
award = prize
carbon = chemical element; it is in a gas that causes global warming
carbon emissions = gases that are sent into the air and lead to global warming
consider = think about something
currently = now, at the moment
demands = here: what the city needs every day
environmental = about nature and the world around us
especially = above all
in addition = also
increase = to go up
initial = at first, beginning
invested money = the money that you spend on a project when it starts
output = here: energy production
passenger terminal = big building where people wait to get on planes
place = put something somewhere
reduce = go down
rely = depend on something
rise = go up
several = a few, some
solar capacity = here: the amount of energy that the country could produce
solar energy = energy from the sun
solar panel = piece of metal, usually on the roof of a house, which uses the sun’s heat and light to produce electricity
Voyager 2 has become the second man-made object to pass the boundary of the solar system and enter interstellar space. It is currently 18 billion km from earth. Its sister ship, Voyager 1 reached this boundary in 2012.
According to NASA scientists, the probe can operate for five to ten more years. It is so far away from earth that commands take about 16 hours to reach it.
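That 16-hour figure is simply the one-way light travel time over the distance quoted above, as a quick calculation shows:

```python
# One-way radio signal travel time to Voyager 2, using the distance quoted above.
distance_km = 18e9                 # about 18 billion km from earth
speed_of_light_km_s = 299_792.458  # radio commands travel at the speed of light

hours = distance_km / speed_of_light_km_s / 3600
print(f"about {hours:.1f} hours one way")  # roughly 16.7 hours
```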
Voyager 2 has entered the heliopause, an area where hot solar winds do not exist any more and the sun’s magnetic field ends. Interstellar space is the vast emptiness between star systems.
The spacecraft is better equipped than its predecessor, Voyager 1. It has instruments on board to measure the speed, density and temperature of solar winds. Voyager 1 stopped sending back this data decades ago. Voyager 2 also sends other useful information back to scientists on earth.
The Voyager missions, which were launched in the 1970s, have become a great success for NASA. Both craft have traveled beyond their projected destinations. The two spacecraft were originally created to study Jupiter and Saturn more closely. Later, it turned out that Uranus and Neptune could also be examined before the probes left the solar system.
Even though their power sources will eventually stop working, the Voyager probes will continue to move on to places no man-made object has gone before.
according to = as said by ..
beyond = farther than
boundary = a line where one are ends and another one starts
command = instruction
craft = spaceship
currently = at the moment
data = information
decade = ten years
density = the relationship between how big something is and how it is filled
equipped = here: it has instruments on board
emptiness = with nothing in it
even though = while
eventually = in the end, finally
examine = to look closely at something and find out more information about it
interstellar space = the area between star systems
launch = send into space
magnetic field = the area around an object that has magnetic power
measure = to find out how high, fast etc.. something is
man-made object = something that is made by people , not by nature
NASA = the American space agency
originally = at first
power source = where the energy to move on comes from
predecessor = here: the spacecraft that was launched before it
probe = unmanned spaceship that has instruments on board
projected destination = the place they were originally planned to go to
reach = get to; arrive at
scientist = a person who is trained in science and works in a lab
solar = from the sun
solar system = our sun and the planets that move around it
German sports car maker Porsche has declared that it would no longer produce diesel cars, but instead concentrate on petrol-powered, electric and hybrid vehicles. It is the first German automaker to completely withdraw from the diesel car sector.
The company made the decision in the aftermath of the emission cheating scandal that hit Porsche’s parent company Volkswagen. In an interview, Porsche’s CEO Oliver Blume said that Porsche’s image had suffered due to the scandal.
For luxury car manufacturer Porsche, the production of diesel cars has not been that important. In 2017 only 12 % of all Porsche cars produced were diesel-powered. The company has been making diesel cars for 10 years, but since February has stopped taking orders for them. It has never developed or produced any diesel engines of its own.
Porsche is also reacting to the fact that more and more European cities are considering a ban on diesel vehicles in an attempt to reduce air pollution. In addition, the demand for diesel cars is also decreasing.
Currently, the German car maker is investing heavily in new hybrid and electric car technology. Next year it will launch its first fully-electric sports car, the Taycan. By 2025 Porsche expects that every second car it produces will be an electric sports car.
aftermath = the period of time that has passed after something important happened
attempt = try
ban = to forbid something
concentrate = focus on
consider = think about
currently = at the moment, now
decrease = go down
demand = the number of cars that people want to buy
due to = because of
emission cheating scandal = in 2015 the United States found out that Volkswagen had lied about emission tests on its cars
declare = to say officially
heavily = a lot
hybrid = here: car that has a petrol engine and an electric motor
in addition = also
launch = to start selling
petrol-powered = engine that runs on petrol instead of diesel
Apple has become the first US company to reach a market value of 1 trillion dollars ($1,000,000,000,000). The hi-tech firm has beaten its rivals Microsoft, Google and Amazon to pass the magical mark. Apple’s stock is now worth $207 per share, an all-time high. If it were a country, Apple would rank 17th in the world, on par with Indonesia.
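Market value is simply the share price multiplied by the number of shares outstanding, so the two figures above imply a share count. The count itself is not quoted in the article; this is only a back-of-the-envelope sketch:

```python
# Implied number of Apple shares, from the figures quoted above (illustrative only).
market_value = 1_000_000_000_000  # one trillion dollars
share_price = 207                 # dollars per share

shares_outstanding = market_value / share_price
print(f"roughly {shares_outstanding / 1e9:.1f} billion shares")  # ~4.8 billion
```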
Before Apple, only China’s oil giant PetroChina made it over the 1 trillion dollar mark back in 2007. Its value declined sharply shortly afterwards when oil prices collapsed.
Apple was founded in 1976 in a California garage by Steve Jobs. In the first two decades the company was famous for producing computers. Later on Apple developed its revolutionary MP3 player, the iPod, which also saved the company from bankruptcy 20 years ago.
The iPhone, Apple’s first smartphone, was introduced in 2007 and has become the company’s flagship product. Up to now over 1.3 billion iPhones have been sold. Although Apple is currently selling fewer new models, sales and profits are rising. It is also making money by selling music and apps.
In 2017, Apple made profits in the range of $50 billion, selling over $220 billion worth of products.
Apple may soon be joined in the 1 trillion dollar club by other hi-tech giants. Amazon and Microsoft are close to the mark and may pass it soon.
all-time high = the highest point ever reached
although = while
bankruptcy = situation in which you have no money left and cannot pay back what you owe to others
collapse = here: to go down very quickly
close = near
currently = at the moment
decade = ten years
decline = to go down very fast
develop = to design and produce a new product
flagship = the best and most important product
found- founded = here: to start a new company
introduce = here: to bring to the market
join = to be together with others
market value = what a company is worth on the market
on par = on the same level
profit = the money you get by selling products and services after your costs have been paid
rank = position
reach = get to a certain point
revolutionary = something completely new and different
rival = another company that wants to be more successful than you are
share = a part of a company that belongs to you
stock = the total value of all the company’s shares
One of the greatest mysteries of aviation history happened on March 8, 2014. Four years ago Malaysia Airlines MH370 went missing on a flight from Kuala Lumpur to Beijing. The plane left its programmed flight path and headed south towards the Indian Ocean. During the last four years, several search teams have tried to locate the missing plane, but up to now, it hasn’t been found.
The Malaysian Boeing 777 with 239 passengers on board disappeared from ground station radar screens but flew on for another six hours. Nobody knows what happened during this time. The last known location of MH370 was somewhere in the southern Indian Ocean near Australia. A few parts of the plane were washed up on Africa’s east coast and on islands in the Indian Ocean.
Australia, China and Malaysia have taken part in hi-tech search operations that covered a total area of 120,000 square kilometres and cost $200 million. Now, another search is being conducted by an American firm.
Investigators speculate on what may have happened on board MH370. Some experts state that there may have been some kind of mechanical failure while others consider a sudden loss of oxygen in the cabin and cockpit. Officials do not rule out the possibility of the pilot crashing the plane deliberately in unknown waters.
Aviation inspectors say that it is important to find out what happened to MH 370 in order to prevent such an accident from happening again.
aviation = the science of flying an airplane
conduct = carry out
consider = think about
cover = stretch = reach from one place to another
deliberately = on purpose; if you really want to do something
disappear = here: to be lost; not seen
firm = company
flight path = the course an airplane takes
ground station = here: building that watches and has contact with planes
head = to go in a certain direction
inspector = person who checks to see if something is done the way it should be
investigator = person who has the job of finding out what caused the accident
hi-tech = with the best and most modern technology
locate = to find out where something is
loss = to lose something
mechanical failure = an object or a machine on board the plane did not work the way it should have
official = person in a high position in an organisation
oxygen = element that is in the air and which we need to breathe
possibility = here: something may have happened
prevent = stop from happening again
programmed = here: the course it should have taken, according to flight computers
radar = machine that uses radio waves to find where something is and watch its movements
several = some
speculate = to guess about the possible causes or effects of something without knowing all the facts and details
sudden = something happening quickly
unknown = not known
wash up = when something drifts from the open sea to the coast
The world of technology has got one step closer to creating quantum computers. Dutch scientists have recently created a 2-qubit (quantum bit) processor running on a silicon chip.
While standard computers work with bits of information that can have only two states, 0 or 1, quantum processors are based on the fact that quantum bits (qubits) can exist in both states at the same time. As a result, they have tremendous computing power and can do things that no classical computer can do. Quantum computers can be used for solving complex problems and can manage a much larger number of calculations at once.
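A toy statevector simulation in Python illustrates the idea; this is only a sketch of the underlying mathematics, not the silicon hardware described in the article. Two qubits placed in superposition carry amplitudes for all four bit patterns at once:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate: puts one qubit in superposition

state = np.zeros(4)
state[0] = 1.0                 # start the 2-qubit register in the state |00>
state = np.kron(H, H) @ state  # apply a Hadamard gate to each qubit

print(state)                   # amplitude 0.5 on each of |00>, |01>, |10>, |11>
print(np.abs(state) ** 2)      # equal 25% probability of measuring each pattern
```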
Scientists explain that they are still in the early stages of developing a real quantum processor. Hardware manufacturer IBM has already built a 50-qubit computer, but with superconductive materials that need extreme cooling. Putting a quantum processor on a silicon chip, which is already used in the computer industry, may be the first step toward mass production.
In such quantum processors, electrons can be in many states at once. This is called superposition. In the lab, scientists have managed to keep electrons between both positions at the same time; however, such electrons are not stable and quickly fall apart. By linking these electrons together as qubits on a silicon chip, hardware manufacturers could produce quantum processors for commercial use.
bit = the smallest unit of information that a computer uses
calculation = when you use numbers to find out something
classical = here: normal; the ones we have today
commercial use = here: something that is produced so that people buy it
complex = very complicated
electron = very small piece of matter with a negative electrical charge that moves around the central part of an atom
hardware manufacturer = company that producers computers
however = but
lab = laboratory
link = connect with each other
manage = work with; succeed in doing something
mass production = products that are produced in factories in large numbers so they can be sold cheaply
processor = central part of a computer that deals with commands and the information it is given
quantum = unit of energy in physics
qubit= quantum bit = piece of information that can exist in two states at the same time
scientist = person who is trained in science and works in a lab
silicon chip = small piece of silicon that has electrical connections and can store information
stable = steady; something that does not change
stage = phase, time during which something happens
superconductive = when electricity can flow through a material very easily, especially at low temperatures
SpaceX, a private space transport company owned by American billionaire Elon Musk, has launched the world’s most powerful rocket, the Falcon Heavy. It is the larger version of the Falcon 9 rocket, which has successfully been putting payloads into space for years. The booster is made up of three rockets strapped together for combined thrust.
The Falcon Heavy was developed over a period of 7 years and cost about $500 million. It is 23 stories tall and has 27 engines. The rocket’s thrust equals that of 18 Boeing 747 jumbo jets.
The new rocket is designed to send large satellites into earth orbit. It can carry 64 metric tons, twice the payload of other rockets, into space at a lower price. The rocket can also transport spacecraft to destinations further away from earth. In addition, its starting boosters are reusable.
The first flight was set to travel as far as the asteroid belt between Mars and Jupiter. The rocket had an electric sports car, Elon Musk’s private Tesla Roadster, on board.
SpaceX has been flying NASA cargo missions to the International Space Station for a few years. The company wants to compete with other businesses in carrying payloads to space. Musk’s company has several commercial customers and has been receiving contracts to fly government payloads. The first manned mission to space is planned for the end of 2018.
asteroid belt = large rocks that move around the sun between Mars and Jupiter
billionaire = person who has a billion dollars, euros etc.. or more
booster = a rocket that gives you the power to leave the earth
cargo mission = here: a trip that carries goods and other products into space
commercial = here: private
compete = be as good as or better than others
contract = an agreement to do a job for someone
design = the way something should work
destination = place you want to go to
develop = to design and create a new product
equal = the same as
in addition = also
International Space Station = ISS = space station that was built by scientists from 16 countries; it is mainly used for scientific experiments
launch = to send an object into space
manned mission = here: trip that sends people into space
orbit = move around an object in a circle
payload = the goods or products that a machine transports to a place
receive = get
reusable = you can use it again
several = some, a few
stories = floors
strapped together = bind together so that it works as one
successfully = if something works the way it should
thrust = power of an engine that makes a rocket move upwards
Amazon has opened its first automated store to the public. Amazon Go is a grocery store located on the ground level of the corporation’s Seattle headquarters.
The store, which offers food, salads and boxed meals, could revolutionize our shopping experience in the future.
As soon as you arrive at the store, a cell phone app connected to an Amazon account registers your presence. Everything that happens in the store is tracked by hundreds of infrared cameras. When you pick up items from the shelves they are automatically put into your virtual shopping cart. The cameras also detect when you put an item back on the shelves and remove it from your cart. The moment you walk out, your account is charged without making any physical payment.
The technologies used at Amazon Go are the same as with driverless cars – computer tracking, weight sensors on shelves and complicated algorithms.
The 1,800 square foot store has been open to Amazon employees for a year. Now the public can also shop there.
However, not everything has been running smoothly in the store’s opening year. There were hardships to overcome. For example, it’s hard for cameras to distinguish between different flavours or products that look the same. They also have problems handling people who move around or identifying shoppers with similar body sizes and clothes.
Even though there are no checkout counters and cashiers who make you wait in line, there are shop assistants who restock goods and help customers find their way around.
account = here: the services of a company that you use
algorithm = set of instructions that are followed in a fixed order and used for solving problems
automated = using computers, cameras and machines
boxed meal = meal that has already been cooked and is ready to eat
cashier = person who you pay money to in a shop
charge = you have to pay money for the goods you buy
checkout counter = place where you pay for products when you leave a store
complicated = difficult
connect = link
corporation = big company
detect = discover, notice
distinguish = find the difference between two things
driverless = without a driver
employee = a person who works there
even though = although, while
flavour = the taste of something
hardships = problems, difficulties
headquarters = place from where a company operates
however = but
identify = find out the name of someone
infrared = light that gives out heat but cannot be seen
item = product
located = to be found; situated
overcome = solve
physical payment = real money you pay with in a store
presence = someone is present
public = here: everybody
register = record
restock = bring in more items to replace those that have been bought
revolutionize = to change completely the way you do something
shopping experience = the way we shop
similar = almost the same
smoothly = here: the way it should; without any problems
track = watch closely
virtual = not real; here: your Amazon account
weight sensor = small object that finds out if there is something on the shelf or not
The Ford Motor Company has revealed plans to invest over $11 billion in the development and production of electric cars by 2022. The announcement was made public at the Detroit Motor Show.
The American carmaker plans to produce 16 fully battery-driven vehicles and 24 hybrid cars by 2022. At the moment the Focus is the only Ford car that can be driven by batteries alone.
Apart from producing electric-driven cars for the North American market, Ford also aims at increasing sales to China, the largest growing car market in the world. In addition, it wants to become the world’s leader in fuel-efficient trucks. The car producer also plans to bring a battery-driven SUV onto the market by 2020.
Instead of creating completely new electric vehicles from scratch, Ford wants to electrify cars that are already popular, because people will already know what they are getting and will be more willing to buy.
Automobile manufacturers around the world are under pressure to develop electric cars because many large countries, including China, India, France and the U.K., have said they would phase out vehicles powered by internal combustion engines within the next two decades. They also face fierce competition from companies like Tesla, a car-maker that specialises in innovative technologies.
As battery costs are going down rapidly, carmakers may find it easier to produce electric cars with higher mileage and at cheaper prices.
aim = target , plan
announcement = official statement
apart from = other than
battery-driven = run by a battery
billion = a thousand million
competition = trying to be more successful than other companies
decade = ten years
development = working on a new product
electrify = make electric
fierce = here: strong
from scratch = to start something from the beginning without using anything that has existed before
fuel-efficient = car that burns fuel in a more effective way than usual; it does not need as much fuel as others do
fully = completely
higher mileage = here: to make an electric car that can travel more miles or kilometres before you have to recharge it
hybrid car = a car that has both a petrol or diesel engine and an electric motor
in addition = also
innovative = new way of doing something; often better than existing methods
instead of = in something’s place
internal combustion engine = engine that produces power by burning petrol or diesel; it is used in most cars
invest = spend money on …
make public = to say something for everyone to hear
manufacturer = producer
phase out = to slowly stop using or producing something
popular = well-known and liked by many people
production = here: making cars
rapidly = quickly
reveal = announce to many people
sales = selling cars
SUV = sport-utility vehicle = car that is bigger and is made for travelling over rough ground; mostly with a 4-wheel drive
under pressure = to make someone do something by using arguments and threats
vehicle = a machine with a motor that is used to take people or things from one place to another
Before you read this, I suggest you read post 19.10.
Foxes eat other animals (they are predators), including rabbits: rabbits are eaten by other animals (they are prey), including foxes, but rabbits eat only plants. Now let’s imagine an island that contains foxes and rabbits; there are no other animals for the foxes to eat and the rabbits have an unlimited supply of plants for food. There are no other animals or plants that can harm rabbits or foxes in any way. The science of how animals and plants interact with their environment, including other animals and plants, is called ecology. And a system in which animals and plants coexist with their environment is called an ecosystem. So, our island is a simple model for an ecosystem.
Let’s suppose that at time t, the number of rabbits in our ecosystem is R and the number of foxes is F. Then the rate of increase in the number of rabbits, dR/dt, is proportional to R (see post 18.15). But, at the same time, the rate of decrease in the number of rabbits, -dR/dt, is proportional to F (more foxes to eat rabbits) and to R (the greater the value of R, the more rabbits can be eaten). So the overall rate of change of the rabbit population is
dR/dt = AR – BFR (1a)
where A and B are constants.
Now let’s think about the number of foxes. The rate of increase in the number of foxes, dF/dt, depends on how much food they eat, BFR (see previous paragraph), but the rate of decrease of the fox population, -dF/dt, is proportional to F (because the more foxes there are, the more of them need to share the available food). So the overall rate of change of the fox population is
dF/dt = BFR – CF (1b)
where C is a constant.
Equations 1a and 1b are examples of coupled differential equations because we cannot solve one without reference to the other. These two coupled differential equations, which provide a simple model for the populations of predators and prey, are called the Lotka-Volterra equations. Alfred Lotka (1880-1949) was born in the Ukraine but moved to England, where he obtained his undergraduate degree from the University of Birmingham. He then moved to the USA where he did research in mathematical biology and on using statistical ideas from physics (see posts 16.38 and 20.26) to investigate the economy. His research ideas were well ahead of his time. Initially he worked as a patent examiner (like Einstein) but later became statistician to an insurance company. He turned down offers of many university positions, perhaps because he could earn more at the insurance company! Vito Volterra (1860-1940) was an Italian mathematician with a much more conventional background – although he lost his job at the university for opposing the fascist leader Benito Mussolini. Lotka and Volterra developed equations 1a and 1b independently at about the same time.
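Before looking at special solutions, it may help to see how equations 1a and 1b behave numerically. The following is a minimal sketch, not part of the original post: it assumes Python with the NumPy, SciPy and Matplotlib libraries, and the values chosen for A, B, C and the starting populations are arbitrary illustrations.

```python
# Minimal numerical solution of the Lotka-Volterra equations 1a and 1b.
# The constants A, B, C and the initial populations are arbitrary
# illustrative values (assumptions), chosen only to show the behaviour.
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

A, B, C = 1.0, 0.02, 1.0          # A, C in year^-1; B couples foxes to rabbits

def lotka_volterra(t, y):
    R, F = y                      # rabbit and fox populations
    dR_dt = A * R - B * F * R     # equation 1a
    dF_dt = B * F * R - C * F     # equation 1b
    return [dR_dt, dF_dt]

t_span = (0.0, 20.0)                              # years
t_eval = np.linspace(*t_span, 2000)
solution = solve_ivp(lotka_volterra, t_span, [60.0, 45.0], t_eval=t_eval)

plt.plot(solution.t, solution.y[0], label="rabbits, R")
plt.plot(solution.t, solution.y[1], label="foxes, F")
plt.xlabel("time, t (years)")
plt.ylabel("population")
plt.legend()
plt.show()
```

With these illustrative numbers both populations oscillate about steady values rather than settling down, which is the behaviour examined more carefully below.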
Now let’s suppose that R and F remain constant, in other words dR/dt = dF/dt = 0. Then, from equations 1a and 1b
AR’ – BR’F’ = 0 (2a)
BF’R’ – CF’ = 0 (2b)
where R’ and F’ are the steady state (constant) values of R and F. Adding equations 2a and 2b gives
AR’ – CF’ = 0 or F’ = AR’/C. (3)
From equations 2a and 3
F’(C – CBF’/A) = 0.
If F’ ≠ 0 (in other words, there is a non-zero number of foxes)
C – CBF’/A = 0 or F’ = A/B. (4)
And, from equations 3 and 4
R’ = F’C/A = C/B. (5)
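As a quick check on this algebra, here is a minimal sketch (assuming the SymPy library is available) that confirms symbolically that F’ = A/B and R’ = C/B make the right-hand sides of equations 2a and 2b zero.

```python
# Symbolic check that R' = C/B and F' = A/B satisfy equations 2a and 2b.
import sympy as sp

A, B, C = sp.symbols("A B C", positive=True)
R_dash, F_dash = C / B, A / B              # candidate steady-state values

eq_2a = A * R_dash - B * R_dash * F_dash   # left-hand side of equation 2a
eq_2b = B * F_dash * R_dash - C * F_dash   # left-hand side of equation 2b

print(sp.simplify(eq_2a), sp.simplify(eq_2b))   # prints: 0 0
```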
Is there any reason to believe that, at the beginning, the number of foxes and rabbits on our island was related to the constants in the Lotka-Volterra equation by equations 4 and 5? No! So, these results may be interesting but they’re not very important.
More general solutions of the Lotka-Volterra equations can become very complicated. So I want to make a simple assumption and look at its consequences. Let’s suppose that the number of rabbits differs from R’ by only a small number r. And that the number of foxes differs from F’ by only a small number f. Then the number of rabbits and foxes oscillates and repeats itself in a time period of
T = 2π/(AC)^1/2 (6)
as explained in appendix 1. Then the population of rabbits oscillates as shown in the picture below. In this graph the rabbit population is expressed as a fraction of R’. To plot the graph, I used values of A = 1 year^-1 and C = 1 year^-1; also, I decided the maximum value for r would be r0 = 0.1R’. There are no special reasons for these values – I simply needed some reasonable numbers to plot a graph. For more details about the graph, see appendix 2.
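If you want to reproduce the graph yourself, here is a minimal sketch (assuming NumPy and Matplotlib); it uses the same values, A = C = 1 year^-1 and r0 = 0.1R’, with R’ set to 1 so that the rabbit population is plotted as a fraction of R’.

```python
# Small-oscillation rabbit population, R = R' + r0*sin((AC)^1/2 t),
# plotted as a fraction of the steady-state value R'.
import numpy as np
import matplotlib.pyplot as plt

A = 1.0              # year^-1
C = 1.0              # year^-1
r0_over_Rdash = 0.1  # r0 = 0.1 R'
omega = np.sqrt(A * C)            # angular frequency, so T = 2*pi/omega

t = np.linspace(0.0, 2 * (2 * np.pi / omega), 500)   # two full periods
R_over_Rdash = 1.0 + r0_over_Rdash * np.sin(omega * t)

plt.plot(t, R_over_Rdash)
plt.xlabel("time, t (years)")
plt.ylabel("rabbit population as a fraction of R'")
plt.show()
```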
If you were to observe the rabbit population on the island during the time period 2 < t < 4 years, you might be alarmed about the decline in the rabbit population. What has gone wrong for the number of rabbits to decrease in this way? The answer is that nothing has gone wrong. The decrease is a natural consequence of the oscillatory rabbit population. The fox population is also oscillatory – I hope I’ve given enough information (in the appendices) for you to plot a graph, like the one above, for foxes.
Of course, real ecosystems are much more complicated than this simple model. But the model does show that populations can be stable even if their numbers oscillate – in much the same way that a moving object has dynamic stability but is not in equilibrium (see post 22.9). But we must be a bit careful. Our model assumes that R is never very different from R’ (because we assume that r is small compared with R’). But a sustained fall in R to well below R’ is a cause for concern.
As increasing numbers of people are concerned about our environment, the word “ecology” is increasingly used to mean caring for the world we live in and an “ecologist” can simply mean someone who is concerned for our environment. I believe these concerns are very important. But we shouldn’t believe that an ecosystem should remain static. Indeed, a healthy ecosystem may be self-correcting, as suggested by the Gaia hypothesis of the British scientist James Lovelock (1919-2022). However, if an ecosystem is subjected to serious abuse, it is unlikely to be able to correct itself. As an extreme example, if most of the world’s water sources were contaminated by high concentrations of toxic waste, everything would die. The result (universal death) might remain stable but it wouldn’t be what we normally consider an ecosystem.
Appendix 1: Oscillatory solution to the Lotka-Volterra equations
Let R = R’ + r (r is much less than R’) and F = F’ + f (f is much less than F’).
Then equations 1a and 1b become
d(R’+r)/dt = A(R’ + r) – B(F’ + f)(R’ + r) (7a)
d(F’ + f)/dt = B(F’ + f)(R’ + r) – C(F’ + f) (7b)
Noting that F’ is a constant, equation 7b becomes
df/dt = (F’ + f)[ B(R’ + r) – C].
Now I’m going to substitute expressions for F’ and R’ from equations 4 and 5 into this result, to give
df/dt = AC/B + Ar – AC/B + Bf(C/B) – Cf = Ar. (8)
The derivation above assumes that fr is so small that it is negligible.
We now have an expression for df/dt that depends only on the number of rabbits. So let’s return to equation 7a and look at the rabbit population. Noting that R’ is a constant and assuming that fr is negligible, this equation becomes
dr/dt = A(R’ + r) – B(R’F’ + R’f + rF’).
Now we’re going to get rid of a lot of constants by differentiating, with respect to time, once more to give
d^2r/dt^2 = A(dr/dt) – BR’(df/dt) – BF’(dr/dt)
and then substitute expressions for F’ and R’ from equations 4 and 5 into this result, to give
d^2r/dt^2 = A(dr/dt) – (BC/B)(df/dt) – (BA/B)(dr/dt) = – C(df/dt). (9)
From equations 8 and 9
d^2r/dt^2 = – ACr. (10)
Equation 10 is the equation of a simple harmonic oscillator whose time period is given by equation 6.
Appendix 2: Plotting a graph of the rabbit population against time
If the rabbit population oscillates about R’ and r is initially zero, following what we know about simple harmonic oscillators, we can write that
r = r0 sin(ωt) = r0 sin[(2π/T)t]
where r0 is the maximum value of r, ω is the angular frequency of the oscillation and T is its time period.
Substituting for T from equation 6 gives
r = r0 sin([AC]^1/2 t).
Remember that R = R’ + r.
External morphology of Lepidoptera
The external morphology of Lepidoptera is the physiological structure of the bodies of insects belonging to the order Lepidoptera, also known as butterflies and moths. Lepidoptera are distinguished from other orders by the presence of scales on the external parts of the body and appendages, especially the wings. Butterflies and moths vary in size from microlepidoptera only a few millimetres long, to a wingspan of many inches such as the Atlas moth. Comprising over 160,000 described species, the Lepidoptera possess variations of the basic body structure which has evolved to gain advantages in adaptation and distribution.
Lepidopterans undergo complete metamorphosis, going through a four-stage life cycle: egg; larva or caterpillar; pupa or chrysalis; and imago (plural: imagines) / adult. The larvae – caterpillars – have a toughened (sclerotised) head capsule, chewing mouthparts, and a soft body, that may have hair-like or other projections, 3 pairs of true legs, and up to 5 pairs of prolegs. Most caterpillars are herbivores, but a few are carnivores (some eat ants, aphids or other caterpillars) or detritivores. Larvae are the feeding and growing stages and periodically undergo hormone-induced ecdysis, developing further with each instar, until they undergo the final larval–pupal moult. The larvae of many lepidopteran species will either make a spun casing of silk called a cocoon and pupate inside it, or will pupate in a cell under the ground. In many butterflies, the pupa is suspended from a cremaster and is called a chrysalis.
The adult body has a hardened exoskeleton, except for the abdomen which is less sclerotised. The head is shaped like a capsule with appendages arising from it. Adult mouthparts include a prominent proboscis formed from maxillary galeae, and are adapted for sucking nectar. Some species do not feed as adults, and may have reduced mouthparts, while others have them modified for piercing and sucking blood or fruit juices. Mandibles are absent in all except the Micropterigidae which have chewing mouthparts. Adult Lepidoptera have two immobile, multi-faceted compound eyes, and only two simple eyes or ocelli, which may be reduced. The three segments of the thorax are fused together. Antennae are prominent and, besides the faculty of smell, act as olfactory radar, and also aid navigation, orientation and balance during flight. In moths, males frequently have more feathery antennae than females, for detecting the female pheromones at a distance. There are two pairs of membranous wings which arise from the mesothoracic (middle) and metathoracic (third) segments; they are usually completely covered by minute scales. The two wings on each side act as one by virtue of wing-locking mechanisms. In some groups, the females are flightless and have reduced wings. The abdomen has ten segments connected with movable inter-segmental membranes. The last segments of the abdomen form the external genitalia. The genitalia are complex and provide the basis for family identification and species discrimination.
The wings, head, and parts of the thorax and abdomen of Lepidoptera are covered with minute scales, a feature from which the order 'Lepidoptera' derives its name; the word "lepidos" in Ancient Greek means 'scale'. Most scales are lamellar (blade-like) and attached with a pedicel, while other forms may be hair-like or specialised as secondary sexual characteristics. The lumen, or surface of the lamella, has a complex structure. It gives colour either due to the pigments contained within it or through its three-dimensional structure. Scales provide a number of functions, which include insulation, thermoregulation and aiding gliding flight, amongst others; the most important of these is the large diversity of vivid or indistinct patterns they provide, which help the organism protect itself by camouflage and mimicry, and to seek mates.
- 1 External morphology
- 2 Head
- 3 Thorax
- 4 Scales
- 5 Abdomen
- 6 Development
- 7 Defense and predation
- 8 See also
- 9 Footnotes
- 10 External links
In common with other members of the superorder Holometabola, Lepidoptera undergo complete metamorphosis, going through a four-stage life cycle: egg, larva / caterpillar, pupa / chrysalis, and imago (plural:imagines) / adult.
Lepidopterans range in size from a few millimetres in length, such as in the case of microlepidoptera, to a wingspan of many inches, such as the Atlas moth and the world's largest butterfly Queen Alexandra's Birdwing.
General body plan
The body of an adult butterfly or moth (imago) has three distinct divisions, called tagmata, connected at constrictions; these tagmata are the head, thorax and abdomen. Adult lepidopterans have four wings: a forewing and a hindwing on both the left and the right side of the thorax and, like all insects, have three pairs of legs.
- Head: The head has large compound eyes and if mouthparts are present, they are almost always a drinking straw-like proboscis.
- Scales: Scales cover the external surface of the body and appendages.
- Thorax: The prothorax is usually reduced.
- Wings: Two pairs of wings are present in almost all taxa. The wings have very few cross-veins.
- Abdomen: The posterior abdominal segments are extensively modified for reproduction. Cerci are absent.
- Larva: Lepidoptera larvae are known as caterpillars, and have a well-developed head and mandibles. They can have from 0 to 5 pairs of prolegs, usually 4.
- Pupa: The pupae in most species are adecticous (with no functional mandibles in the pupal state) and obtect (with appendages fused or glued to the body), while others are decticous (with functional mandibles present in the pupal state) and exarate (having the antennae, legs, and wings free).
Distinguishing taxonomic features
The chief characteristics used to classify lepidopteran species, genera and families are:
- the mouthparts
- the shape and venation of the wings
- whether the wings are homoneurous (the venation of the forewings and hind wings alike) or heteroneurous (forewings and hind wings different)
- whether the wings are aculeate (more or less covered with specialized bristles called microsetae) or nonaculeate
- the type of wing coupling (jugate or frenate)
- the anatomy of the reproductive organs
- the structure of larva and position of primary setae
- whether the pupa is exarate or obtect
The morphological characteristics of caterpillars and pupae used for classification are completely different from those of adults; different classification schemes are sometimes provided separately for classifying adults, larvae and pupae. The characteristics of immature stages are increasingly used for taxonomic purposes as they provide insights into systematics and phylogenies of Lepidoptera that are not apparent from examination of adults.
Like all animal heads, the head of a butterfly or moth contains the feeding organs and the major sense organs. The head typically consists of two antennae, two compound eyes, two palpi and a proboscis. Lepidoptera have ocelli which may or may not be visible. They also have sensory structures called chaetosemata, the functions of which are largely unknown. The head is filled largely by the brain, the sucking pump and its associated muscle bundles. Unlike the adults, the larvae have one-segmented mandibles.
The head capsule is well sclerotised and has a number of sclerites or plates, separated by sutures. The sclerites are difficult to distinguish from sulci (singular – sulcus) which are secondary thickenings. The regions of the head have been divided into a number of areas which act as a topographical guide for description by lepidopterists but cannot be discriminated in terms of their development. The head is covered by hair-like or lamellar scales and found either as tufts on the frons or vertex (referred to as rough-scaled) or pressed close to the head (referred to as smooth-scaled).
The sensory organs and structures on the head show great variety, and the shape and form of these structures, as also their presence or absence, are important taxonomic indicators for classifying taxa into families.
Head of a moth of family Gracillariidae showing extent of scales on the head
Antennae are prominent paired appendages that project forwards between the animal's eyes and consist of a number of segments. In the case of butterflies, their length varies from half the length of the forewing to three-quarters of the length of the forewing. The antennae of butterflies are slender and knobbed at the tip and, in the case of the Hesperiidae, are hooked at the tip. In some butterfly genera such as Libythea and Taractrothera the knob is hollowed underneath. Moth antennae are either filiform (thread-like), unipectinate (comb-like), bipectinate (feather-like), hooked, clubbed or thickened. Some moths have knobbed antennae akin to those of butterflies, including the families Castniidae, Neocastniidae and Euschemonidae.
Antennae are the primary organs of olfaction (smell) in Lepidoptera. The antenna surface is covered with large numbers of olfactory scales, hairs or pits; as many as 1,370,000 are found on the antennae of a Monarch. Antennae are extremely sensitive; the feathered antennae of male moths from the Saturniidae, Lasiocampidae and many other families are so sensitive that they can detect the pheromones of female moths from distances of up to 2 km (1.2 mi) away. Lepidoptera antennae can be angled in many positions. They help the insect in locating the scent and can be considered to act as a kind of 'olfactory radar'. In moths, males frequently have antennae which are more feathery than those of the females, for detecting the female pheromones at a distance. Since females do not need to detect the males, they have simpler antennae. Antennae have also been found to play a role in the time-compensated sun compass orientation in migratory Monarch butterflies.
Lepidoptera have two large, immovable compound eyes which consist of a large number of facets or lenses, each connected to a lens-like cylinder which is attached to a nerve leading to the brain. Each eye may have up to 17,000 individual light receptors (ommatidia) which in combination provide a broad mosaic view of the surrounding area. One tropical Asian family, the Amphitheridae, has compound eyes divided into two distinct segments. The eyes are usually smooth but may be covered by minute hairs. The eyes of butterflies are usually brown, golden-brown or even red as in the case of some species of skippers.
While most insects have three simple eyes, or ocelli, only two ocelli are present in all species of Lepidoptera, except a few moths, one on each side of the head near the edge of the compound eye. On some species, sense organs called chaetosemata are found near the ocelli. The ocelli are not homologous to the simple eyes of caterpillars which are differently named as stemmata. The ocelli of Lepidoptera are reduced externally in some families; where present, they are unfocussed, unlike stemmata of larvae which are fully focussed. The utility of ocelli is not understood at present.
Butterflies and moths are able to see ultra-violet (UV) light, and wing colours and patterns are principally observed by Lepidoptera in this region. The patterns seen on their wing under UV light differ considerably from those seen in normal light. The UV patterns act as visual cues which help differentiate between species for the purpose of mating. Studies have been carried out on Lepidoptera (mostly butterflies) wing patterns illuminated by UV light.
Typically, the labial palpi are prominent, 3-segmented, springing from under the head and curving up in front of the face. There is great variation in morphology of labial palpi in different families of Lepidoptera; sometimes the palpi are separate and sometimes they are connivent and form a beak, but they are always independently movable. In other cases, the labial palpi may not be erect but 'porrect' (projecting forward horizontally). Palpi consist of a short basal segment, a comparatively long central segment and a narrow terminal portion. The first two segments are densely scaled and may be hirsute; the terminal segment is bare. The terminal segment may be blunt or pointed; it may project straight or at an angle from the second segment inside which it may be concealed.
While mandibles or 'jaws' (chewing mouthparts) are only present in the caterpillar stage, the mouthparts of most adult Lepidoptera mainly consist of the sucking kind; this part is known as the proboscis or 'haustellum'. A few Lepidoptera species have reduced mouthparts and therefore do not feed in the adult state. Others, such as the basal family Micropterigidae, have mouthparts of the chewing kind.
The proboscis (plural – proboscises) is formed from maxillary galeae and is adapted for sucking nectar. It consists of two tubes held together by hooks and separable for cleaning. Each tube is inwardly concave, thus forming a central tube up which moisture is sucked. Suction is effected through the contraction and expansion of a sac in the head. The proboscis is coiled under the head when the insect is at rest and extended only when feeding. The maxillary palpi are reduced and even vestigial. They are conspicuous and 5-segmented in some of the more basal families and are often folded.
The shape and dimensions of the proboscis have evolved to give different species a wider and therefore more advantageous diet. There is an allometric scaling relationship between body mass of Lepidoptera and length of proboscis from which an interesting adaptive departure is the unusually long-tongued hawk moth Xanthopan morgani praedicta. Charles Darwin predicted the existence and proboscis length of this moth before its discovery based on his knowledge of the long-spurred Madagascan star orchid Angraecum sesquipedale.
There are primarily two feeding guilds in Lepidoptera – the nectarivorous, who obtain the majority of their nutritional requirements from floral nectar, and those of the frugivorous guild, who feed primarily on juices of rotting fruit or fermenting tree sap. There are substantial differences between the morphology of the proboscises of the two feeding guilds. Hawkmoths (family Sphingidae) have elongated proboscises which enable them to feed on and pollinate flowers with long tubular corollas. Besides this, a number of taxa (especially noctuid moths) have evolved different proboscis morphologies. Certain noctuid species have developed piercing mouthparts; the proboscis has sclerotised scales on the tip which are used to pierce and suck blood or fruit juices. Proboscises in some Heliconius species have evolved to consume solids such as pollen. Some other moths, mostly noctuids, have modified proboscises to suit their mode of nutrition – lachrymophagy (feeding on tears of sleeping birds). The proboscises often have sharp apices as well as a host of barbs and spurs on the stem.
A nymphalid butterfly sucking on a banana.
Sara Longwing Heliconius sara, one of many Heliconius species known to feed on pollen, with pollen on its proboscis.
Lachryphagous Lepidoptera, such as the two Julia Butterflies (Dryas iulia) drinking the tears of turtles in Ecuador, have hooks and barbs at the tip of the proboscis.
The thorax, which develops from segments 2, 3 and 4 of the larva, consists of three invisibly divided segments, namely the prothorax, mesothorax and metathorax. The organs of insect locomotion – the legs and wings – are borne on the thorax. The forelegs spring from the prothorax, the forewings and middle pair of legs are borne on the mesothorax, and the hindwings and hindlegs arise from the metathorax. In some cases, the wings are vestigial.
The upper and lower parts of the thorax (sterna and terga respectively) are composed of segmental and intrasegmental sclerites which display secondary sclerotisation and considerable modification in the Lepidoptera. The prothorax is the simplest and smallest of the three segments while the mesothorax is the most developed.
Between the head and thorax is the membranous neck or cervix. It comprises a pair of lateral cervical sclerites and is composed of both cephalic and thoracic elements.:71 Between the head and the thorax is a tufted scale called the pronotum. On either side is a shield-like scale called a scapula. In the Noctuoidea, the metathorax is modified with a pair of tympanal organs.
Fore-legs in the Papilionoidea exhibit reduction of various forms: the butterfly family Nymphalidae, or brush-footed butterflies as they are commonly known, have only the rear two pairs of legs fully functional, with the forward pair strongly reduced and not capable of walking or perching. In the Lycaenidae, the tarsus is unsegmented, as the tarsomeres are fused, and tarsal claws are absent. The aroliar pad (a pad projecting between the tarsal claws of some insects) and pulvilli (singular: pulvillus, a pad or lobe beneath each tarsal claw) are reduced or absent in the Papilionidae. The tarsal claws are also absent in the Riodinidae.
- See glossary for terms used
Adult Lepidoptera have two pairs of membranous wings covered, usually completely, by minute scales. A wing consists of an upper and lower membrane which are connected by minute fibres and strengthened by a system of thickened hollow ribs, popularly but incorrectly referred to as 'veins', as they may also contain tracheae, nerve fibres and blood vessels. The membranes are covered with minute scales which have jagged ends or hairs and are attached by hooks. The wings are moved by the rapid muscular contraction and expansion of the thorax.
The wings arise from the meso- and meta-thoracic segments and are similar in size in the basal groups. In more derived groups, the meso-thoracic wings are larger with more powerful musculature at their bases and more rigid vein structures on the costal edge.
Besides providing the primary function of flight, wings also have secondary functions of self-defence, camouflage and thermoregulation. In some Lepidoptera families such as the Psychidae and Lymantriidae, the wings are reduced or even absent (often in the female but not the male).
The shape of wings exhibits great variety in Lepidoptera. In the case of the Papilionoidea, the costa may be straight or highly arched. It is sometimes concave on the hindwing. It is occasionally serrate or minutely saw-toothed on the forewing. The apex may be rounded, pointed or falcate (produced, and concave below). The termen tends to be straight or concave on the forewing while it is usually more or less convex on the hindwing. The termen is often crenulate or dentate, i.e. produced at each vein and concave in between them. The dorsum is normally straight but may be concave.
The hindwing is frequently caudate, i.e. the veins near the end of the tornus have one or more tails; the tornus itself is often produced and frequently lobed. Along the hindwing termen there are tightly-packed scales in a double row. The undersides of the scales project and form a regular narrow fringe referred to as cilia.
The plume moths (Family Pterophoridae) have split wings.
In the many-plumed moths (Family Alucitidae), wings are split along each vein.
Pachyerannis obliquaria, mating pair. Winged male above, small wingless female below.
Tubular veins run through the two-layered membranous wing. Veins are connected to the haemocoel and in theory allow haemolymph to flow through them. In addition, a nerve and trachea may pass through the veins.
Lepidopteran venation is simple in that there are few crossbars. The wing venation in Lepidoptera is diagnostic for distinguishing between taxa, as well as between genera and families. The terminology is based on the Comstock-Needham system, which gives the morphological description of insect wing venation. In the basal Lepidoptera, the venation of the forewing is similar to that of the hindwing, a condition referred to as "homoneurous". The Micropterigidae (Zeugloptera) have venation that resembles that of the most primitive caddisflies (Trichoptera). All other Lepidoptera, the vast majority (around 98%), are "heteroneurous", the venation of the hindwing differing from that of the forewing and being sometimes reduced. Moths of the families Nepticulidae, Opostegidae, Gracillariidae, Tischeriidae and Bucculatricidae, amongst others, often have greatly reduced venation in both wings. Homoneurous moths tend to have the "jugum" form of wing-coupling as opposed to the "frenulum–retinaculum" arrangement in the case of more advanced families.
The Lepidoptera have developed a wide variety of morphological wing-coupling mechanisms in the imago which render these taxa "functionally dipterous". All but the most basal forms exhibit this wing coupling. There are three different types of mechanisms – jugal, frenulo–retinacular and amplexiform.
The more primitive groups have an enlarged lobe-like area near the basal posterior margin (i.e. at the base of the forewing) called a jugum, that folds under the hindwing during flight. Other groups have a frenulum on the hindwing that hooks under a retinaculum on the forewing.
In all butterflies (with the exception of male Euschemoninae) and in Bombycoidea moths (with the exception of the Sphingidae), there is no arrangement of frenulum and retinaculum to couple the wings. Instead, an enlarged humeral area of the hindwing is broadly overlapped by the forewing. Despite the absence of a specific mechanical connection, the wings overlap and operate in phase. The power stroke of the forewing pushes down the hindwing in unison. This type of coupling is a variation of the frenate type in which the frenulum and retinaculum are completely lost.
The wings of Lepidoptera are minutely scaled, which gives the name to this order; the name 'Lepidoptera' was coined in 1735 by Carl Linnaeus for the group of "insects with four scaly wings". It is derived from Ancient Greek lepidos or λεπίδος (scale), itself originating from the Greek lepis (female genitive singular form lepidos) meaning "(fish) scale" (and related to lepein "to peel") and pteron or πτερόν (wing).
Scales also cover the head, parts of the thorax and abdomen as well as parts of the genitalia. The morphology of scales has been studied by Downey & Allyn (1975) and scales have been classified into three groups, namely hair-like, or piliform, blade-like, or lamellar and other variable forms.
A few taxa of the Trichoptera (caddisflies), which are the sister group to the Lepidoptera, have hair-like scales, but always on the wings and never on the body or other parts of the insect. Caddisflies also possess caudal cerci on the abdomen, a feature absent in the Lepidoptera. According to Scoble (2005), "morphologically, scales are macrotrichia, and thus homologous with the large hairs (and scales) that cover the wings of Trichoptera (caddisflies)".
Although there is great diversity in scale form, they all share a similar structure. Scales, like other macrochaetes, arise from special trichogenic (hair-producing) cells and have a socket which is enclosed in a special 'tormogen' cell; this arrangement provides a stalk or pedicel by which scales are attached to the substrate. Scales may be piliform (hairlike) or flattened. The body or 'blade' of a typical flattened scale consists of an upper and lower lamella with an air-space in between. The surface towards the body is smooth and known as the inferior lamella. The upper surface, or superior lamella, has transverse and longitudinal ridges and ribs. The lamellae are held apart by struts called trabeculae and contain pigments which give colour. The scales cling somewhat loosely to the wing and come off easily without harming the butterfly.
The scales on butterfly wings are pigmented with melanins that can produce the colours black and brown. The white colour in the butterfly family Pieridae is a derivative of uric acid, an excretory product.:84 Bright blues, greens, reds and iridescence are usually created not by pigments but through the microstructure of the scales. This structural coloration is the result of coherent scattering of light by the photonic crystal nature of the scales. The specialised scales that provide structural colours to reflected light mostly produce ultra-violet patterns which are discernible in that part of the ultra-violet spectrum that Lepidopteran eyes can see. The structural colour seen is often dependent upon the angle of view. For example, in Morpho cypris, the colour from the front is a bright blue but when seen from an angle changes very quickly to black.
The iridescent structural coloration on the wings of many lycaenid and papilionid species, such as Parides sesostris and Teinopalpus imperialis, and lycaenids such as Callophrys rubi, Cyanophrys remus, and Mitoura gryneus, has been studied. They manifest the most complex photonic scale architectures known – regular three-dimensional periodic lattices, that occur within the lumen of some scales. In the case of the Kaiser-i-Hind (Teinopalpus imperialis), the three-dimensional photonic structure has been examined by transmission electron tomography and computer modelling to reveal naturally occurring "chiral tetrahedral repeating units packed in a triclinic lattice", the cause of the iridescence.
Structural blue colour in Morpho cypris, a nymphalid
The white colour in pierids, such as Delias eucharis is a derivative of uric acid, an excretory product.
Wing coloration in certain lepidoptera permits camouflage as can be seen in the case of the geometrid moth Colostygia aqueata.
Scales play an important part in the natural history of Lepidoptera. Scales enable the development of vivid or indistinct patterns which help the organism protect itself by camouflage, mimicry and warning. Besides providing insulation, dark patterns on wings allow sunlight to be absorbed and are probably involved in thermoregulation. Bright and distinctive colour patterns in butterflies which are distasteful to predators help communicate their toxicity or inedibility, thus preventing predation. In Batesian mimicry, wing colour patterns help edible Lepidopterans mimic inedible models, while in Müllerian mimicry, inedible butterflies resemble each other to reduce the numbers of individuals sampled by inexperienced predators.
Scales may have evolved initially for providing insulation. Scales on the thorax and other parts of the body may contribute to maintaining the high body temperatures required during flight. The 'solid' scales of basal moths are however not as efficient as those of their more advanced relatives as the presence of a lumen adds air layers and increases the insulation value. Scales also help increase the lift to drag ratio in gliding flight.
For newly emerged adults of most myrmecophilous Lycaenidae, deciduous waxy scales provide some protection from predators as they emerge from the nest. In the case of the Moth butterfly (Liphyra brassolis), the caterpillars are unwelcome guests in nests of tree ants, feeding on ant larvae. The adults emerging from pupae are covered with soft, loose adhesive scales which rub off and stick on the ants as they make their way out of the nest after hatching.
Male Lepidoptera possess special scales, called androconia (singular – androconium), which have evolved as a result of sexual selection for the purposes of disseminating pheromones for attracting suitable mates. Androconia may be dispersed on the wings, body, or legs or occur in patches, referred to as "brands", "sex brands" or "stigmata" on the wings, usually in invaginations of the upper surface of the forewings, sometimes concealed by other scales. Androconia are also known to occur in the folds of wings. These brands sometimes consist of hairlike tufts which facilitate the diffusion of the pheromone. The role of androconia in the courtship of pierid and nymphalid butterflies, such as Pyronia tithonus, has been proven experimentally.:16–17
Successive close-ups of the scales of a Peacock wing
Photographic and light microscopic images: a zoomed-out view of an Inachis io; a close-up of the scales of the same specimen; a high magnification of the coloured scales (probably a different species). Electron microscopic images: a patch of wing (approx. ×50); scales close up (approx. ×200); a single scale (×1000); microstructure of a scale (×5000).
The abdomen or body is composed of nine segments. In the larva it ranges from segments 5 to 13. The eleventh segment of the larva holds a pair of anal claspers, which protrude in some taxa and represent the genitalia.
Many families of moths have special organs to help detect bat echolocation. These organs are known as tympana (singular – tympanum). The Pyraloidea and almost all Geometroidea have tympana located on the anterior sternite of the abdomen. The Noctuoidea also have tympana, but in their case, the tympana are located on the underside of the metathorax, the structure and position of which are unique and a taxonomic distinguishing feature of the superfamily.
The females of some moths have a scent-emitting organ located at the tip of the abdomen.
The genitalia are complex and provide the basis for species discrimination in most families and also in family identification. The genitalia arise from the tenth or most distal segment of the abdomen. Lepidoptera have some of the most complex genital structures of all insects, with a wide variety of complex spines, setae, scales and tufts in males, claspers of different shapes and different modifications of the ductus bursae in females, through which stored sperm is transferred within the female directly, or indirectly, to the vagina for fertilisation.
The arrangement of genitalia is important in courtship and mating as they prevent cross-specific mating and hybridisation. The uniqueness of a species' genitalia led to the use of the morphological study of genitalia as one of the most important keys in taxonomic identification of taxa below family level. With the advent of DNA analysis, the study of genitalia has now become just one of the techniques used in taxonomy.
There are three basic configurations of genitalia in the majority of the Lepidoptera based on how the arrangement in females of openings for copulation, fertilisation and egg-laying has evolved:
- Exoporian : Hepialidae and related families have an external groove that carries sperm from the copulatory opening (gonopore) to the egg-laying opening (ovipore) and are termed Exoporian.
- Monotrysian : Primitive groups have a single genital aperture near the end of the abdomen through which both copulation and egg laying occur. This character is used to designate the Monotrysia.
- Ditrysian : The remaining groups have an internal duct that carries sperm and form the Ditrysia, with separate openings for copulation and egg-laying.
The genitalia of the male and female in any particular species are adapted to fit each other like a lock (male) and key (female). In males, the ninth abdominal segment is divided into a dorsal 'tegumen' and ventral 'vinculum'. They form a ring-like structure for the attachment of genital parts and a pair of lateral clasping organs (claspers or 'harpe'). The male has a median tubular organ (called the aedeagus) which is extended through an eversible sheath (or 'vesica') to inseminate the female. The males have paired sperm ducts in all lepidopterans; the paired testes are separate in basal taxa and fused in advanced forms.
While the layout of internal genital ducts and openings of the female genitalia depends upon the taxonomic group that insect belongs to, the internal female reproductive system of all lepidopterans consists of paired ovaries and accessory glands which produce the yolks and shells of the eggs. Female insects have a system of receptacles and ducts in which sperm is received, transported and stored. The oviducts of the female join together to form a common duct (called the 'oviductus communis') which leads to the vagina.
When copulation takes place, the male butterfly or moth places a capsule of sperm (spermatophore) in a receptacle of the female (called the corpus bursae). The sperm, when released from the capsule, swims directly into or via a small tube into a special seminal receptacle (spermatheca), where the sperm is stored until it is released into the vagina for fertilisation during egg laying, which may occur hours, days, or months after mating. The eggs pass through the ovipore. The ovipore may be at the end of a modified ovipositor or surrounded by a pair of broad setose anal papillae.
Butterflies of the Parnassinae (Family Papilionidae) and some Acraeini (Family Nymphalidae) add a post-copulatory plug, called the sphragis, to the abdomen of the female after copulation preventing her from mating again.
The males of many species of Papilionoidea are furnished with secondary sexual characteristics. These consist of scent-producing organs, brushes, and brands or pouches of specialised scales. These presumably meet the function of convincing the female that she is mating with a male of the correct species.
Three species of hawkmoth have been recorded to emit ultrasound clicks by rubbing their genitalia; males produce sound by rubbing rigid scales on the exterior of the claspers, while females produce sound by contracting their genitalia, which causes rubbing of scales against the abdomen. The function of this noise-making is not clear, and suggestions put forward include the jamming of bat echo-location and advertising that the bat's prey are prickly and excellent fliers.
Citheronia regalis with claspers closed
Citheronia regalis with claspers open
Close up of the hardened sphragis extruding 2 to 3 mm behind the abdomen of Parnassius
The fertilised egg matures and hatches to give a caterpillar. The caterpillar is the feeding stage of the Lepidopteran life-cycle. The caterpillar needs to be able to feed and to avoid being eaten and much of its morphology has evolved to facilitate these two functions.:108 After growth and ecdysis, the caterpillar enters into a sessile developmental stage called a pupa (or chrysalis) around which it may form a casing. The insect develops into the adult in the pupa stage; when ready the pupa hatches and the adult stage or imago of a butterfly or moth arises.
Like most insects, the Lepidoptera are oviparous or 'egg-layers'. Lepidopteran eggs, like those of other insects, are centrolecithal in that the eggs have a central yolk surrounded by cytoplasm. The yolk provides the liquid nourishment for the embryo caterpillar until it escapes from the shell. The cytoplasm is enclosed by the vitteline envelope and a proteinaceous membrane called the chorion protects the egg externally. The zygote nucleus is located posteriorly.
In some species of Lepidoptera, a waxy layer is present inside the chorion adjacent to the vitelline layer which is thought to have evolved to prevent desiccation. In insects, the chorion has a layer of air-pores in the otherwise solid material which provides very limited capability for respiratory function. In Lepidoptera, the chorion layer above this air pore layer is lamellar with successive sheets of protein arranged in a particular direction and stepped so as to form a helical arrangement.
The top of the egg is depressed and forms a small central cavity called micropyle through which the egg is fertilised. The micropyle is situated on top in eggs which are globular, conical, or cylindrical; in those eggs which are flattened or lenticular, the micropyle is located on the outer margin or rim.
The eggs of Lepidoptera are usually rounded and small (1 mm) though they may be as large as 4 mm in the case of Sphingidae and Saturniidae.:640 They are generally quite plain in colour, white, pale green, bluish-green, or brown. Butterfly and moth eggs come in various shapes; some are spherical, others hemispherical, conical, cylindrical or lenticular (lens-shaped). Some are barrel-shaped or pancake-shaped, while others are turban or cheese-shaped. They may be angled or depressed at both ends, ridged or ornamented, spotted or blemished.
The eggs are deposited singly, in small clusters, or in a mass, and invariably on or near the food source. Captive moths have been known to lay eggs in the cages they have been sequestered in. Egg size in the Lepidoptera is affected by a number of factors. Lepidoptera species which overwinter in the egg stage usually have larger eggs than the species that do not. Similarly, species feeding on woody plants in larval stage have larger eggs than those species feeding on herbaceous plants. Eggs laid by older females of a few butterfly species have been noted to be smaller in size than their younger counterparts. In the absence of adequate nutrition, the females of the corn-borer moth ( Ostrinia spp.) have been recorded to lay clutches with egg sizes below normal.
While escaping, the newly hatched larvae of many species sometimes eat the chorion to emerge. Alternatively, the egg shell may have a line of weakness around the cap which gives way allowing the larva to emerge. The egg shell and a small amount of yolk trapped in the amniotic membranes forms the first food for most lepidopteran larvae.
Eggs of Pioneer Anaphaeis aurota (family Pieridae)
Eggs of Crimson Rose Atrophaneura hector (family Papilionidae)
Egg of Mallow Skipper Carcharodus alceae (family Hesperiidae)
Egg of Large Copper Lycaena dispar (family Lycaenidae)
Upright eggs of ditrysian lepidopteran, Moon Moth Actias luna (family Saturniidae) laid in captivity on paper
Eggs of Pine Looper Moth Bupalus piniaria (family Geometridae)
Eggs of Lackey moth Malacosoma neustria (family Lasiocampidae)
Caterpillars are "characteristic polypod larvae with cylindrical bodies, short thoracic legs and abdominal prolegs (pseudopods)". They have a toughened (sclerotised) head capsule, mandibles (mouthparts) for chewing, and a soft tubular, segmented body, that may have hair-like or other projections, 3 pairs of true legs, and additional prolegs (up to 5 pairs). The body consists of 13 segments, of which 3 are thoracic (T1, T2 and T3) and 10 are abdominal (A1 to A10).
All true caterpillars have an upside-down Y-shaped line that runs from the top of the head downward. In between the Y-shaped line lies the frontal triangle or frons. The clypeus, located below the frons, lies between the two antennae. The labrum is found below the clypeus. There is a small notch in the centre of the labrum with which the leaf edge engages when the caterpillar eats.
The larvae have silk glands which are located on the labium. These glands are modified salivary glands. They use these silk glands to make silk for cocoons and shelters. Located below the labrum are the mandibles. On each side of the head there are usually six stemmata just above the mandibles. These stemmata are arranged in a semicircle. Below the stemmata there is a small pair of antennae, one on each side.
The thorax bears three pairs of legs, one pair on each segment. The prothorax (T1) has a functional spiracle which is actually derived from the mesothorax (T2) while the metathorax has a reduced spiracle which is not externally open and lies beneath the cuticle.:114 The thoracic legs consist of coxa, trochanter, femur, tarsus and claw and are constant in form throughout the order. However they are reduced in the case of certain leaf-miners and elongated in certain Notodontidae. In Micropterigidae, the legs are three-segmented, as the coxa, trochanter and femur are fused.:114
Abdominal segments 3–6 and 10 each bear a pair of legs that are more fleshy. The thoracic legs are known as true legs and the abdominal legs are called prolegs. The true legs vary little in the Lepidoptera except for reduction in certain leaf-miners and elongation in the family Notodontidae.:114 The prolegs contain a number of small hooks on the tip, which are known as crochets. The families of Lepidoptera differ in the number and positioning of their prolegs. Some larvae such as inchworms (Geometridae) and loopers (Plusiinae) have five pairs of prolegs or less, while others like Lycaenidae and slug caterpillars (Limacodidae) lack prolegs altogether. In some leaf-mining caterpillars there are crochets present on the abdominal wall which are reduced prolegs, while other leaf-mining species lack the crochets entirely. The abdominal spiracles are located on each side of the body on the first eight abdominal segments.
Caterpillars have different types of projections; setae (hairs), spines, warts, tubercles, and horns. The hairs come in an assortment of colours and may be long or short; single, in clusters, or in tufts; thinner at the point or clubbed at the end. A spine may either be a chalaza (having a single point) or a scolus (having multiple points). The warts may either be small bumps or short projections on the body. The tubercles are fleshy body projections that are either short and bump-like or long and filament-like. They usually occur in pairs or in a cluster on one or more segments. The horns are short, fleshy, and are drawn to a point. They are usually found on the eighth abdominal segment.
A large number of species of families Saturniidae, Limacodidae and Megalopygidae have stinging caterpillars which have poisonous setae, also called urticating hairs, and in the case of Lonomia – a Brazilian saturniid genus – can kill a human due to its potent anticoagulant poison.:644 Caterpillars of many taxa that have sequestered toxic chemicals from host-plants or have sharp urticating hair or spines, display aposematic colouration and markings.
Caterpillars undergo ecdysis (moulting) and pass through a number of larval instars, usually five but varying between species. The new cuticle is soft and allows the caterpillar to grow and develop before it hardens and becomes inelastic. At the final ecdysis, the old cuticle splits and curls up into a small ball at the posterior end of the pupa and is known as the larval exuvia.
The larvae of notodontid moths such as that of Stauropus fagi, have elongated thoracic legs.
Saddleback moth Acharia stimulea larvae display aposematic colouring in the shape of a saddle.
Underside of a slug caterpillar of Phobetron pithecium (family Limacodidae), showing the absence of prolegs
Caterpillar of Common Aspen Leafminer Phyllocnistis populiella
Chrysalis or pupa
A cocoon is a casing spun of silk by many moth caterpillars and numerous other holometabolous insect larvae as a protective covering for the pupa. Most Lepidoptera larvae either make a cocoon and pupate inside it or pupate in a cell under the ground, with the exception of butterflies and advanced moths such as noctuids, whose pupae are exposed. The pupae of moths are usually brown and smooth, whereas butterfly pupae are often colourful and vary greatly in shape. In butterflies, the exposed pupa is often referred to as a chrysalis, a term derived from the Greek χρυσός (chrysós), "gold", referring to the golden colour of some pupae.
The caterpillars of many butterflies attach themselves by a button of silk to the underside of a branch or stone or other projecting surface. They remain attached to the silk pad by a hook-like process called a cremaster. Most chrysalids hang head downward, but in the families Papilionidae, Pieridae, and Lycaenidae, the chrysalis is held in a more upright position by a silk girdle around the middle of the chrysalis.
The pupae of most Lepidoptera are obtect, with appendages fused or glued to the body, while the rest have exarate pupae, having the antennae, legs, and wings free and not glued to the body.
During the pupal stage, the morphology of the adult is developed through elaboration of larval structures. The general aspect of the adult is visible before the outer surface hardens: the head resting on the thorax, the eyes, the antennae (brought forward over the head), the wings folded over the thorax, and the six legs between the wings and the abdomen. Among the features discernible in the head region of a pupa are sclerites, sutures, pilifers, mandibles, eye-pieces, antennae, palpi and the maxillae. The pupal thorax displays the three thoracic segments, legs, wings, tegulae, alar furrows and axillary tubercles. The pupal abdomen exhibits the ten segments, spines, setae, scars of the larval prolegs and tubercles, the anal and genital openings, and the spiracles. The pupae of borers display flange-plates, while those of specialised Lepidoptera exhibit the cremaster.
While the pupa is generally stationary and immobile, those of the primitive moth families Micropterigidae, Agathiphagidae and Heterobathmiidae have fully functional mandibles. These serve principally to allow the adult to escape from the cocoon. Besides this, all appendages and the body are separate from the pupal skin and enjoy a degree of independent motion. All other superfamilies of the Lepidoptera are more specialised: they have non-functional mandibles, the appendages and body are attached to the pupal skin, and independent movement is lost.
The pupae of some moths are able to wriggle their abdomen. The three caudal segments of the pupal abdomen (segments 8–10) are fixed; the other segments are movable to some degree. While the more derived Lepidoptera can wriggle only the last two or three segments of the abdomen, more basal taxa such as the Micropterigidae can wriggle the remaining seven abdominal segments; this presumably helps them to push the anterior end out of the pupal case before eclosion. The pupae of Hepialidae are able to move back and forth in the larval tunnel by wriggling, aided by projections on the back in addition to spines. Abdominal wriggling is considered to have startle value and to discourage predators. In a few hawk moths, such as Theretra latreillii, the wriggling of the abdomen is accompanied by a rattling or clicking sound, which adds to the startle effect.
Papilionid chrysalids are typically attached to a substrate by the cremaster and with the head up held by a silk girdle.
Suspended golden-coloured nymphalid chrysalis of Euploea core.
The specialised pupa of a sphingid moth (Agrius convolvuli) can wriggle its abdomen making a clicking sound, which can have a startle effect.
Defense and predation
Lepidopterans are soft-bodied, fragile and almost defenseless, and the immature stages move slowly or are immobile; hence all stages are exposed to predation by birds, small mammals, lizards, amphibians and invertebrate predators (notably parasitoid and parasitic wasps and flies), as well as fungi and bacteria. To combat this, Lepidoptera have developed a number of strategies for defense and protection, including camouflage, aposematism, mimicry, and the development of threat patterns and displays.
Camouflage is an important defense strategy, achieved through changes in body shape, colour and markings. Some lepidopterans blend with their surroundings, making them difficult for predators to spot. Caterpillars can be shades of green that match their host plant. Others resemble inedible objects such as twigs or leaves. The larvae of some species, such as the Common Mormon and the Western Tiger Swallowtail, look like bird droppings.
Some species of Lepidoptera sequester or manufacture toxins which are stored in their body tissue, rendering them poisonous to predators; examples include the Monarch butterfly in the Americas and Atrophaneura species in Asia. Predators that eat poisonous lepidopterans may become sick and vomit violently, and so learn to avoid those species. A predator who has previously eaten a poisonous lepidopteran may avoid other species with similar markings in the future, thus saving many other species as well. Toxic butterflies and larvae tend to develop bright colours and striking patterns as an indicator to predators about their toxicity. This phenomenon is known as aposematism.
Aposematism has also led to the development of mimicry complexes of Batesian mimicry, where edible species mimic aposematic taxa, and Müllerian mimicry, where inedible species, often of related taxa, have evolved to resemble each other, so as to benefit from reduced sampling rates by predators during learning. Similarly, adult Sesiidae species (also known as clearwing moths) have a general appearance that is sufficiently similar to a wasp or hornet to make it likely that the moths gain a reduction in predation by Batesian mimicry.
Eyespots are a type of automimicry used by some lepidopterans. In butterflies, the spots are composed of concentric rings of scales of different colours. The proposed role of the eyespots is to deflect predators' attention. Their resemblance to eyes provokes the predator's instinct to attack these wing patterns. The role of filamentous tails in Lycaenidae has been suggested as confusing predators as to the real location of the head, giving them a better chance of escaping alive and relatively unscathed.
Some caterpillars, especially members of the Papilionidae, possess an osmeterium, a Y-shaped protrusible gland on the prothoracic segment of the larva. When threatened, the caterpillar emits unpleasant smells from this organ to ward off predators.
- Differences between butterflies and moths
- Glossary of Lepidopteran terms
- Insect morphology
- Morphology (biology)
- Kristensen, Niels P.; Scoble, M. J. & Karsholt, Ole (2007). "Lepidoptera phylogeny and systematics: the state of inventorying moth and butterfly diversity" (PDF). In Z.-Q. Zhang & W. A. Shear (eds.), Linnaeus Tercentenary: Progress in Invertebrate Taxonomy. Zootaxa 1668: 699–747. ISBN 978-0-12-690647-9.
- Dugdale, J. S. (1996). "Natural history and identification of litter-feeding Lepidoptera larvae (Insecta) in beech forests, Orongorongo Valley, New Zealand, with especial reference to the diet of mice (Mus musculus)" (PDF). Journal of the Royal Society of New Zealand 26 (4): 251–274. doi:10.1080/03014223.1996.9517513.
- Scoble, M. J. (1995). "Mouthparts". The Lepidoptera: Form, Function and Diversity. Oxford University Press. pp. 6–19. ISBN 978-0-19-854952-9.
- Borror, Donald J.; Triplehorn, Charles A.; Johnson, Norman F. (1989). Introduction to the Study of Insects (6, illustrated ed.). Saunders College Publications. ISBN 978-0-03-025397-3. Retrieved 16 November 2010. (No preview.)
- Scoble (1995). Section Sensation, (pp. 26–38).
- Hoskins, Adrian. "Butterfly Anatomy Head (& other pages)". www.learnaboutbutterflies.com. Retrieved 15 November 2010.
- Powell, Jerry A. (2009). "Lepidoptera". In Resh, Vincent H.; Cardé, Ring T. Encyclopedia of Insects (2nd ed.). Academic Press. pp. 661–663. ISBN 978-0-12-374144-8.
- Scoble (1995). Section Scales, (pp. 63–66).
- Mallet, Jim (12 June 2007). "Details about the Lepidoptera and Butterfly Taxome Projects". The Lepidoptera Taxome Project. University College London. Retrieved 14 November 2010.
- Gillot, Cedric (1995). "Butterflies and moths". Entomology (2nd ed.). ISBN 978-0-306-44967-3.
- Evans, W. H. (1932). "Introduction". Identification of Indian Butterflies (2nd ed.). Mumbai: Bombay Natural History Society. pp. 1–35.
- "lepidopteran". Encyclopædia Britannica Online. Encyclopædia Britannica. 2011. Retrieved 12 February 2011.
- Heppner, J. B. (2008). "Butterflies and moths". In Capinera, John L. Encyclopedia of Entomology. Gale virtual reference library 4 (2nd ed.). Springer Reference. p. 4345. ISBN 978-1-4020-6242-1.
- Mosher, Edna (2009) . A Classification of the Lepidoptera Based on Characters of the Pupa (reprint ed.). BiblioBazaar, LLC. ISBN 978-1-110-02244-1.
- Kristensen, Niels P. (2003). Lepidoptera, Moths and Butterflies: Morphology, Physiology and Development, Volume 2. Volume 4, Part 36 of Handbuch der Zoologie. Walter de Gruyter. ISBN 978-3-11-016210-3.
- Scoble (1995). Section The Adult Head – Feeding and Sensation, (pp. 4–22).
- Holland, W. J. (1903). "Introduction". The Moth Book (PDF). London: Hutchinson and Co. ISBN 0-665-75744-1.
- Merlin, Christine; Gegear, Robert J.; Reppert, Steven M. (2009). "Antennal circadian clocks coordinate sun compass orientation in migratory Monarch butterflies". Science 325 (5948): 1700–1704. doi:10.1126/science.1176221. PMC 2754321. PMID 19779201.
- Triplehorn, Charles A.; Johnson, Norman F. (2005). Borror and Delong's Introduction to the Study of Insects. Belmont, California: Thomson Brooks/Cole. ISBN 978-0-03-096835-8.
- Agosta, Salvatore J.; Janzen, Daniel H. (2004). "Body size distributions of large Costa Rican dry forest moths and the underlying relationship between plant and pollinator morphology". Oikos 108 (1): 183–193. doi:10.1111/j.0030-1299.2005.13504.x.
- Kunte, Krushnamegh (2007). "Allometry and functional constraints on proboscis lengths in butterflies". Functional Ecology 21: 982–987. doi:10.1111/j.1365-2435.2007.01299.x. Retrieved 8 February 2013.
- Krenn, H. W.; Penz, C. M. (1 October 1998). "Mouthparts of Heliconius butterflies (Lepidoptera: Nymphalidae): a search for anatomical adaptations to pollen-feeding behavior". International Journal of Insect Morphology and Embryology 27 (4): 301–309. doi:10.1016/S0020-7322(98)00022-1.
- Mackenzie, Debora (20 December 2006). "Moths drink the tears of sleeping birds". New Scientist. Reed Business Information. Retrieved 10 February 2012.
- Hilgartner, Roland; Raoilison, Mamisolo; Büttiker, Willhelm; Lees, David C.; Krenn, Harald W. (22 April 2007). "Malagasy birds as hosts for eye-frequenting moths". Biology Letters (The Royal Society) 3 (2): 117–120. doi:10.1098/rsbl.2006.0581. Retrieved 10 February 2012.
- Scoble (1995) Ch. 3 : The adult thorax – a study in function & effect (pp. 39–40).
- Scoble, M. J.; Aiello, Annette (1990). "Moth-like butterflies (Hedylidae: Lepidoptera):a summary, with comments on the egg" (PDF). Journal of Natural History 24 (1): 159–164. doi:10.1516/XX46-6402-G214-KM84.
- Chapman, R. F. (1998). "Thorax". The insects: structure and function (4th ed.). Cambridge University Press. p. 45. ISBN 978-0-521-57890-5..
- Robbins, Robert K. 1981 The "False Head" Hypothesis: Predation and Wing Pattern Variation of Lycaenid Butterflies" American Naturalist 118(5) 770-775
- Scoble (1995). Section "Wings". Pg 55.
- Dudley, Robert (2002). The biomechanics of insect flight: form, function, evolution. Princeton University Press. ISBN 978-0-691-09491-5.
- Stocks, Ian (2008). "Wing coupling". In Capinera, John L. Encyclopedia of Entomology. Gale virtual reference library 4 (2nd ed.). Springer Reference. p. 4266. ISBN 978-1-4020-6242-1.
- Scoble (1995). Section Wing coupling, (pp. 56–60).
- Gorb, Stanislav (2001). "Inter-locking of body parts". Attachment devices of insect cuticle. Springer. p. 305. ISBN 978-0-7923-7153-3.
- Harper, Douglas. "Lepidoptera". The Online Etymology Dictionary. Retrieved 21 November 2010. from "Lepidoptera" on Dictionary.com website.
- Downey, J.C.; Allyn, A.C. (1975). "Wing-scale morphology and nomenclature". Bull. Allyn Mus. 31: 1–32.
- Chapman (1988). Section Wings and flight (p. 190).
- Gullan, P. J.; Cranston, P. S. (2005). The Insects: an Outline of Entomology (3rd ed.). Wiley-Blackwell. ISBN 978-1-4051-1113-3.
- Mason, C. W. (1927). "Structural colours in Insects - II". Journal of Physical Chemistry 31 (3): 321–354. doi:10.1021/j150273a001.
- Vukusic, P. (2006). "Structural colour in Lepidoptera" (PDF). Current Biology 16 (16): R621–R623. doi:10.1016/j.cub.2006.07.040. PMID 16920604.
- Prum, R. O.; Quinn, T.; Torres, R. H. (2006). "Anatomically diverse butterfly scales all produce structural colours by coherent scattering.". Journal of Experimental Biology 209 (4): 748–765. doi:10.1242/jeb.02051. PMID 16449568.
- Kinoshita, Shu-ichi (2008). Structural Colors in the Realm of Nature. World Scientific. pp. 52–53. ISBN 978-981-270-783-3.
- Michielsen, K.; Stavenga, D. G. (2008). "Gyroid cuticular structures in butterfly wing scales: biological photonic crystals". Journal of the Royal Society Interface 5 (18): 85–94. doi:10.1098/rsif.2007.1065. PMC 2709202. PMID 17567555.
- Poladian, Leon; Wickham, Shelley; Kwan Lee & Large, Maryanne C. J. (2009). "Iridescence from photonic crystals and its suppression in butterfly scales". Journal of the Royal Society Interface 6 (Suppl. 2): S233–S242. doi:10.1098/rsif.2008.0353.focus. PMC 2706480. PMID 18980932.
- Argyros, A.; Manos, S.; Large, M. C. J.; McKenzie, D. R.; Cox, G. C., and Dwarte, D. M. (2002). "Electron tomography and computer visualisation of a three-dimensional 'photonic' crystal in a butterfly wing-scale". Micron 33 (5): 483–487. doi:10.1016/S0968-4328(01)00044-0. PMID 11976036.
- Ghiradella, Helen (1991). "Light and color on the wing: structural colors in butterflies and moths". Applied Optics 30 (24): 3492–3500. doi:10.1364/AO.30.003492. PMID 20706416.
- Wynter-Blyth, M. A. (1957). Butterflies of the Indian Region (Reprint of 2009 by Today & Tomorrows Publishers, New Delhi ed.). Mumbai, India: Bombay Natural History Society. ISBN 978-81-7019-232-9.
- "Androconium". Encyclopædia Britannica Online. Encyclopædia Britannica. Retrieved 30 October 2010.
- Hall, Jason P. W.; Harvey, Donald J. (2002). "A survey of androconial organs in the Riodinidae (Lepidoptera)" (PDF). Zoological Journal of the Linnean Society 136 (2): 171–197. doi:10.1046/j.1096-3642.2002.00003.x.
- Comstock, John Henry (2008) . An Introduction to Entomology. Read Books, Originally published by Comstock Publishing Company. ISBN 978-1-4097-2903-7.
- Scoble (2005). Chapter Higher Ditrysia, pg 328.
- "Lepidopteran". Encyclopædia Britannica Online. Encyclopædia Britannica, London. Retrieved 16 November 2010.
- Scoble (1995). Section Adult abdomen, (pp. 98–102).
- Watson, Traci (3 July 2013). "Hawkmoths zap bats with sonic blasts from their genitals". http://www.nature.com/. Nature Publishing Group. Retrieved 5 July 2013.
- Scoble (1995). Chapter Immature stages, (pp. 104–133).
- Nation, James L. (2002). Insect Physiology and Biochemistry. CRC Press. ISBN 978-0-8493-1181-9.
- Chapman (1998). Section The egg and embryology (pp. 325–362).
- Holland, W. J. (1898). "Introduction". The Butterfly Book (PDF). London: Hutchinson and Co. ISBN 0-665-13041-4.
- P. J. Gullan & P. S. Cranston (2010). "Life-history patterns and phases". The Insects: an Outline of Entomology (4th ed.). Wiley-Blackwell. pp. 156–164. ISBN 978-1-4443-3036-6.
- Wagner, David L. (2005). Caterpillars of Eastern North America. Princeton University Press. ISBN 978-0-691-12144-4.
- Miller, Jeffrey C. (3 August 2006). "Caterpillar Morphology". Caterpillars of the Pacific Northwest Forests and Woodlands. Northern Prairie Wildlife Research Center. Retrieved 16 November 2010.
- MacAuslane, Heather J. (2008). "Aposematism". In Capinera, John L. Encyclopedia of Entomology. Gale virtual reference library 4 (2nd ed.). Springer Reference. ISBN 978-1-4020-6242-1.
- Common, I. F. B. (1990). Moths of Australia. Brill Publishers. ISBN 978-90-04-09227-3.
- Harper, Douglas. "Chrysalis". Online Etymology Dictionary. Dictionary.com. Retrieved 16 November 2010.
- Stehr, Frederick W. (2009). "Pupa and puparium". In Resh, Vincent H.; Cardé, Ring T. Encyclopedia of Insects (2nd ed.). Academic Press. pp. 970–973. ISBN 978-0-12-374144-8.
- Figuier, Louis (1868). The insect world: being a popular account of the orders of insects, together with a description of the habits and economy of some of the most interesting species. New York: D. Appleton & Co.
- Sourakov, Andrei. (2008). Pupal Mating in Zebra Longwing (Heliconius Charithonia): Photographic Evidence. News of the Lepidopterists' Society 50(1):26–32.
- "Caterpillar and Butterfly Defense Mechanisms". EnchantedLearning.com. Retrieved 7 December 2009.
- Latimer, Jonathan P.; Karen Stray Nolting (2000). Butterflies. Houghton Mifflin Harcourt. ISBN 0-395-97944-7.
- Kricher, John (1999). "6". A Neotropical Companion. Princeton University Press. pp. 157–158. ISBN 978-0-691-00974-2.
- Santos, J. C.; Cannatella, D. C. (2003). "Multiple, recurring origins of aposematism and diet specialization in poison frogs" (PDF). Proceedings of the National Academy of Sciences 100 (22): 12792–12797. doi:10.1073/pnas.2133521100. PMC 240697. PMID 14555763.
- Insects and Spiders of the World, 10. Marshall Cavendish Corporation. Marshall Cavendish. January 2003. pp. 292–293. ISBN 0-7614-7344-0.
- Carroll, Sean (2005). Endless forms most beautiful: the new science of evo devo and the making of the animal kingdom. W. W. Norton & Co. pp. 205–210. ISBN 0-393-06016-0.
- Heffernan, Emily (2004). Symbiotic Relationship Between Anthene emolus (Lycaenidae) and Oecophylla smaragdina (Formicidae): an Obligate Mutualism in the Malaysian Rainforest (PDF) (M.Sc. thesis). University Of Florida.
- "Osmeterium". Merriam-Webster, Incorporated. Retrieved December 9, 2009.
- Hadley, Debbie. "Osmeterium". About.com Guide. Retrieved December 9, 2009.
- SEM Image of butterfly scale and its pedicel (third from top).
- Exquisite castaways – photo-feature on Lepidopteran eggs by National Geographic.
- Uncommon vision – photo-feature on moths by National Geographic.
Battle of Stalingrad
The Battle of Stalingrad is one of the most famous battles of World War II. The annihilation of the German 6th Army and allied troops in the winter of 1942 / beginning of 1943 is considered the psychological turning point of the German-Soviet War that the German Reich started in June 1941.
The industrial city of Stalingrad was originally an operational objective of the German campaign and was intended to serve as the starting point for the actual advance into the Caucasus. After the German attack on the city in the late summer of 1942, a Soviet counter-offensive in November 1942 encircled up to 300,000 soldiers of the Wehrmacht and its allies. Hitler decided that the German troops should hold out and wait for a relief offensive, which failed in December 1942. Although the situation of the insufficiently supplied soldiers in the pocket was hopeless, Hitler and the military leadership insisted that the loss-making fighting be continued. Most of the soldiers stopped fighting at the end of January / beginning of February 1943, partly on orders, partly for lack of material and food, and were taken prisoner of war without an official surrender. Around 10,000 dispersed soldiers hiding in cellars and the sewer system continued their resistance until the beginning of March 1943. Of the roughly 110,000 soldiers of the Wehrmacht and allied troops who were taken prisoner, only around 6,000 returned home. Over 700,000 people were killed in the fighting for Stalingrad, most of them soldiers of the Red Army.
Although the German Wehrmacht suffered major operational defeats during the Second World War, Stalingrad gained special importance as a German and Soviet memorial site . The battle was instrumentalized by Nazi propaganda even during the war and, more than any other battle of the Second World War, is still anchored in the collective memory today .
After the attack by the German Reich on the Soviet Union on June 22, 1941 and the counter-offensive by the Red Army in the winter of the same year, a new offensive was planned for the summer of 1942 under the code name Fall Blau with the aim of capturing the Soviet oil fields in the Caucasus .
The city of Stalingrad was classified as an important operational objective, on the one hand because of its industrial and geographical importance and on the other because of its symbolic value:
- Stalingrad was of great strategic importance for the Soviet Union, as the Volga is an important waterway. In addition, the city was named after Stalin, which is why the attack also served to demoralize the Soviet armed forces. The city stretched 40.2 kilometers north-south along the west bank of the Volga, but was only 6.4 to 8 kilometers wide at its widest point. The Volga, which is 1.6 kilometers wide at this point, protected the city from being enclosed. The river was part of an important supply route over which military equipment, delivered by the United States under the Lend-Lease Act via the Persian Corridor and the Caspian Sea, was transported to central Russia. German plans for a renewed advance on Moscow were therefore discarded, because Hitler considered the Caucasian oil fields more important for the further conduct of the war. The conquest of Stalingrad was supposed to cut this transport route and secure a further advance of the Wehrmacht into the Caucasus with its oil deposits near Maikop, Grozny and Baku.
- The symbolic meaning of the name Stalingrad for both Stalin and Hitler was an additional incentive for both warring parties to achieve a military victory. Stalin had defended the city during the Russian Civil War as Army Commissioner of the Southern Front and, among other things, consolidated the power of the WKP (B) with mass shootings of alleged saboteurs. In 1925 the city was renamed from Tsaritsyn to Stalingrad.
According to calculations by Stalin's high command, in 1942, despite one million fallen Red Army soldiers and over three million prisoners of war in German hands, 16 million Soviet citizens of military age were still available to face the German armies. The arms industry relocated behind the Urals produced 4,500 tanks, 3,000 combat aircraft, 14,000 artillery pieces and 50,000 grenade launchers by 1942. On the German side, a million soldiers had been killed, wounded or reported missing; of the tanks involved in the attack, only one in ten was still operational.
However, Hitler assumed that "the enemy had largely used up the masses of his reserves in the first winter of the war". On the basis of this misjudgment he ordered the attack on Stalingrad and the Caucasus at the same time. This split the limited German offensive forces and led to a spatial expansion and thinning of the front. The success of the plan depended on the vast flank of Army Group B along the Don being defended by the armies of allied states, while German armies conducted the actual offensive operations. The main attack force was the 200,000 to 250,000 strong German 6th Army under General Friedrich Paulus. It was supported by the 4th Panzer Army under Colonel General Hermann Hoth with various subordinate Romanian units.
German advance on Stalingrad
Due to the German advance towards Stalingrad and the Volga, the Stalingrad Front was formed on July 12, 1942 on the orders of the Soviet High Command from the command of the dissolved Southwest Front . Marshal Tymoshenko and, from July 22nd, Lieutenant General WN Gordow were in command . It consisted of the 62nd , 63rd and 64th Armies and was reinforced by the 51st , 66th and 24th Army , the 1st and 4th Panzer Army and the 1st Guard Army by the end of August .
Strong Soviet resistance in the Don bend and a lack of fuel delayed the German advance by several weeks. On July 17, 1942, the lead elements of the German 6th Army met the vanguard of the Soviet 62nd and 64th Armies, which initially received support from the 4th Panzer Army and later from the 1st Panzer Army. The strong frontal resistance of the Soviet troops during the encirclement battle near Kalatsch (July 25th to August 11th) forced the German Wehrmacht to deploy its troops more widely. Because of the increasing breadth of the battlefield, the Stalingrad Front was divided on August 7th by order of the Stavka, and a South-Eastern Front was also formed, whose command was transferred to Colonel General Jerjomenko. In numerical terms, the Soviet high command could call on about 1,000,500 men for the defense of Stalingrad, with 13,541 guns, 894 tanks and 1,115 aircraft at their disposal.
Only on August 21, 1942 could the German 6th Army, with the LI. Army Corps (General of the Artillery von Seydlitz-Kurzbach), cross the Don at Kalatsch and begin the advance on Stalingrad. The German troops were opposed by the 62nd Army under Lieutenant General AI Lopatin, the 63rd Army under Lieutenant General W. I. Kuznetsov and the 64th Army under Lieutenant General VI Chuikov. It should be borne in mind that a Soviet army of that time, owing to its different organizational structure, was closer in personnel and materiel to a German corps than to a German army. It follows that at the beginning of the battle both sides were roughly equally strong, assuming that a German army consisted of four to five army corps, depending on the situation, equipment and mission.
Advance detachments of the German 16th Panzer Division reached the Volga north of Stalingrad near Rynok on August 23 at 6 p.m. , but soon had to defend against strong Soviet counter-attacks from the north. On the same day, a massive German air raid with 600 planes killed thousands of civilians in Stalingrad, who were not to be evacuated on Stalin's orders. The German Luftflotte 4 dropped a total of approximately one million bombs with a total weight of 100,000 tons on the city.
For a long time, the Stavka prevented the population from leaving the city, which was overcrowded with refugees, as Stalin was of the opinion that their presence would strengthen the morale of the fighting soldiers. Women and children had to help expand the defensive positions, dig anti-tank ditches and, in some cases, even take part in the fighting. In August 1942 there were around 600,000 people in the city. Over 40,000 civilians were killed in air raids in the first days of the battle. It was not until the end of August that residents began to be resettled across the Volga. But with such a large population it was too late for a complete evacuation of Stalingrad. Around 75,000 civilians had to remain in the destroyed city. Neither the Red Army nor the Germans showed any consideration for the civilian population. Numerous residents had to live in holes in the ground. Many froze to death in the winter of 1942/43; others starved because there was no more food.
On August 23, 1942, when German advance detachments broke through to the Volga north of Stalingrad, the Soviet high command, on instructions from Stalin, declared the city to be under a state of siege. From that day on, responsibility for the immediate defense of the city lay with Colonel General Andrei Yeryomenko, who after Gordov's dismissal had taken over the organization and command of the Soviet Stalingrad Front on Stalin's personal instructions. Nikita Khrushchev stood at his side as political commissar and Major General IS Varennikov as chief of staff. Order No. 227, issued by Stalin on July 28, 1942, with the slogan "Not a step back!", led to the formation of firing squads and punitive battalions for Red Army soldiers accused of a lack of combat readiness or of cowardice.
Course of the battle
The course of the battle is divided into three major phases.
- 1st phase: From late summer 1942 the 6th Army tries to conquer the city of Stalingrad. After up to 90 percent of the city has been taken, with high losses on both sides, the situation turns in favor of the Red Army.
- 2nd phase: In Operation Uranus the Red Army encircles the 6th Army. The poorly equipped Romanian units deployed to secure the flanks cannot withstand the Soviet offensive.
- 3rd phase: After Hitler forbids an attempted breakout, the 6th Army digs in and waits for relief from outside. In Operation Wintergewitter the Germans attempt to reach the pocket, which ultimately fails because of the resistance of the Red Army and the subsequent collapse of Italian units on the central Don. After heavy losses from fighting, cold and hunger, the remnants of the 6th Army surrender in February 1943.
First phase of attack by the 6th Army
On September 12, 1942, Hitler demanded that Paulus take Stalingrad. "The Russians", according to Hitler, were "at the end of their strength". After the state of siege was imposed, Lieutenant General AI Lopatin was temporarily replaced as commander-in-chief of the 62nd Army by his chief of staff NI Krylov and then succeeded by Lieutenant General Vasily Chuikov. General Lopatin had doubted that he could hold the city against the German troops as Stalin's orders demanded. Command of the 64th Army, which Chuikov had held until August 4th, had already been transferred to General MS Shumilov.
On September 13, the major German attack began with dive-bomber raids and massive fire from field artillery and mortars on the inner defensive belt of Stalingrad. The 295th Infantry Division advanced against Mamayev Hill and the 71st Infantry Division against Stalingrad Central Station and the central ferry terminal in the city centre. The German XIV Panzer Corps (16th Panzer Division, 60th and 3rd (motorized) Divisions), deployed in the north of the city, had the task of defending the eastern end of the Kotluban corridor between the Don and the Volga against the repeated attacks of the Soviet 1st Guards Army, the 24th Army and the 66th Army (Lieutenant General AS Schadow). The very next day the commanding general, von Wietersheim, was dismissed by Hitler because he had proposed that the costly attacks on Stalingrad be stopped altogether. The new commander, Major General Hans-Valentin Hube, ordered a new attack on the Orlowka salient on September 27, which quickly collapsed, so that the 94th and 389th Infantry Divisions had to be brought in as reinforcements. Opposite the Soviet 21st Army (Lieutenant General AI Danilow), the VIII Army Corps (General of the Artillery Heitz) held the Don sector between Shishikin and Kotluban with the 76th and 113th Infantry Divisions. The further the German LI. Army Corps advanced into the inner city, the fiercer the Soviet resistance became.
The Soviet defenders turned every foxhole, house and intersection into a fortress. On September 14th, the 13th Guards Rifle Division under Major General Rodimzew arrived as reinforcement to stop the further German advance. On September 21, the 284th Rifle Division (Colonel Batjuk) reached the west bank of the Volga and took over the sector between the "Red October" steelworks and Mamayev Hill. On September 27, the hard-fought north-western side of Mamayev Hill remained in German possession; only the eastern slope was held by the 284th Rifle Division. On September 29, the Orlowka salient was cut off, and the encircled Soviet units fought on until they were destroyed. At the end of September 1942, the 6th Army High Command shifted the focus of the attack to the industrial complexes in the north of the city. The 284th Rifle Division relieved the 13th Guards Rifle Division on Mamayev Hill. The fighting was particularly fierce over the two railway stations, the grain silo, Pavlov's House, Mamayev Hill (referred to by the Germans as Höhe 102, also called Mamai Hill) and the large factories in the north: the "Red October" steelworks, the "Barricades" gun factory and the "Dzerzhinsky" tractor factory.
Only as part of Operation Hubertus (November 9th to 12th) did the German units succeed in bringing the almost completely destroyed city nearly entirely under their control, something Hitler had already celebrated as a great victory in his speech in the Löwenbräukeller on November 8, 1942. The 62nd Army under Lieutenant General Tschuikow now held only a narrow strip a few hundred metres wide on the Volga and small parts of the north of the city.
Second phase: Operation Uranus - encirclement of the 6th Army
With "Operation Uranus", which began on the morning of 19 November 1942, the troops of the Soviet Don Front under Rokossovsky and the Southwestern Front under Vatutin broke through the Romanian 3rd Army in the west, while the Stalingrad Front under Andrei Ivanovich Eremenko broke through the Romanian 4th Army in the southeast; within five days the German forces at Stalingrad were encircled.
To this end, the 5th Panzer Army (General Romanenko) attacked southwards from the Don bridgehead of Serafimowitsch and the 21st Army (from October 14 under Lieutenant General Tschistjakow) from the Kletskaya bridgehead, each achieving a breakthrough. The Romanian 3rd Army (General Petre Dumitrescu) facing them could not hold out for long, because it had to secure an overstretched flank and was insufficiently equipped to do so. For defense against Soviet tanks these units mostly had only horse-drawn 3.7 cm anti-tank guns (PaK), which were practically ineffective against the Soviet T-34. The Red Army's advance progressed quickly, also because the weather was bad at the time of "Operation Uranus" and the German Luftwaffe could not intervene. When the weather improved, the Luftwaffe found itself unusually on the defensive, as the Lavochkin La-5 was used in larger numbers for the first time in this battle, an aircraft of comparable performance to the German Fw 190 and thus capable of effectively covering its own ground-attack aircraft.
Behind the Romanian 3rd Army stood the XXXXVIII. Panzer Corps, consisting of the German 22nd and the Romanian 1st armored divisions. On Hitler's orders it was thrown against the Soviet troops to stabilize the situation. The Panzer Corps, equipped primarily with completely outdated Czech Panzerkampfwagen 38(t), had been held in readiness in stables and barns. Mice in the straw had, in large numbers, eaten their way through the panelling and electrical cables of the vehicles, leaving only around 30 tanks operational, which, given their small number and rather low combat strength, could not stop the Red Army's attack. The commander of the corps, Ferdinand Heim, subsequently served as a scapegoat, was expelled from the Wehrmacht and was not given another command, in Boulogne, until 1944.
On November 20, the attack by the 57th Army (General Tolbuchin ) of the Stalingrad Front (Jerjomenko) began in southern Stalingrad . The Soviet 13th Panzer Corps (Major General Tanaschishin) broke through the northern wing of the Romanian 4th Army near Krasnoarmeisk . The Romanian 20th Infantry Division under General Tataranu was pushed northwards to the German IV Army Corps to Beketowka and later surrounded by this corps and the 6th Army, which were still subordinate to the German 4th Panzer Army. The second attack wedge, the 4th Mechanized Corps (Major General WT Wolski) of the 51st Army (Major General NI Trufanow ) broke through the front of the Romanian VI. Corps at the Tundutowo train station and could not be stopped by the German 29th Infantry Division . The breakthrough in the Romanian 4th Army and the German 4th Panzer Army enabled the Soviet armored spearheads to perform a double pincer movement that met on November 23 at Kalatsch am Don and thus closed the ring around the German 6th Army encircled in the Stalingrad area.
The Wehrmacht was now in a dangerous dilemma: in the event of a defeat at Stalingrad, the Red Army could break through to Rostov and the Black Sea and thereby cut off Army Group A, which would have meant the loss of the entire southern wing of the German Eastern Front. A withdrawal from the pre-Caucasus, on the other hand, would have put the Caucasian oil fields out of reach and made a planned advance towards Iran or India completely illusory. Hitler, however, did not want to admit this to himself and therefore delayed the withdrawal order for Army Group A. Only when the failure of the attempt to relieve the 6th Army became apparent was the withdrawal of Army Group A initiated, on December 28, 1942; because of the late decision it turned into a costly retreat over hundreds of kilometres, in which heavy weapons, vehicles and tanks often had to be abandoned simply because of the ever-worsening lack of fuel.
Third phase: conquering the cauldron
From November 22nd the 6th Army was completely surrounded by Soviet troops. On that day the units of the 4th Panzer Army (IV Army Corps) and two Romanian divisions, which had also been pushed into the pocket, were placed under its command. Paulus and his staff planned first to stabilize the fronts and then to break out to the south. At that time, however, the equipment necessary for such an operation was already lacking.
On November 24, Hitler finally decided to supply the pocket from the air, after Reichsmarschall Hermann Göring had assured him that the Luftwaffe was able to fly in the required minimum of 500 tons of supplies daily. Both Göring and Hitler had reportedly been informed by the general staffs of the Army and the Luftwaffe that this was not possible. The highest daily supply level was reached on December 19, 1942 with 289 tons, but on some days no supply flights could be carried out at all because of the bad weather. From November 25, 1942 to February 2, 1943, instead of the promised 500 tons, an average of only 94 tons per day could be flown in.
On November 24th the soldiers' rations were halved: the bread allocation was set at 300 grams a day, subsequently reduced to 100 grams, and towards the end it was only 60 grams per man. At times this meant only three slices of bread a day, far less than a fighting soldier needs. The troops starved for weeks.
The air supply, for which the VIII. Air Corps of Luftflotte 4 was primarily responsible, broke down further when, in the course of the Soviet Middle Don operation, the airfields of Tatsinskaya (24 December 1942) and Morosovskaya (5 January 1943) west of the pocket, which served as departure points for the flights into the pocket, and then the Pitomnik airfield (January 16, 1943) inside the pocket were captured by the Red Army; supplies could then only be flown in via the poorly prepared Gumrak field airstrip. Most of the encircled soldiers died not as a result of fighting but of malnutrition and hypothermia.
Another major problem for the soldiers and officers in the pocket was that the wounded had to be flown out via these supply airfields. Especially after only the Gumrak makeshift airfield remained available, the flight crews often had to use their weapons to keep desperate men from clinging to the aircraft, and they did not always succeed. It happened, for example, that men held on to the undercarriage of departing machines until their strength gave out and they fell.
At this time the Soviet army made use of the work of German communists (including Walter Ulbricht, Erich Weinert and Willi Bredel). The main task of the Soviet propaganda department was to play 20 to 30-minute programmes of music, poetry and propaganda on mobile gramophones and broadcast them over huge loudspeakers. Among other things, the popular old hit song with the refrain "In my homeland, in my homeland, there's a reunion!" was broadcast over these loudspeakers.
Other means of propaganda, including the slogan "Every seven seconds a German soldier dies. Stalingrad - mass grave.", which followed the monotonous ticking of a clock, and the so-called "Death Tango" music, served to further demoralize the soldiers in the pocket. At first, however, most propaganda broadcasts of this kind led, on the orders of the German generals, to increased shelling of the positions from which they came, so that a large part of the Soviet personnel involved in these operations were killed. Because of the decline in German ammunition deliveries, this bombardment became weaker and weaker over time, so that in the end the broadcasts could hardly be silenced in this way.
The supply of the trapped German troops with ammunition, equipment and food via an airlift was essential for the continued fighting in the pocket. The Inspector General of the Luftwaffe, Erhard Milch, was commissioned by Adolf Hitler to guarantee it. For this purpose the Ju 52, under Lufttransportführer 2, as well as converted bombers such as the He 111 and training and passenger aircraft of the types Ju 86 and Fw 200 were used. Even 27 of the four-engined He 177 A-1 bombers of Kampfgeschwader 50 were employed. Lufttransportführer 1, also known as Lufttransportführer Morozovskaya, was in charge of the He 111 units.
The delivery of the army's daily requirement of at least 500 tons of supplies, promised by the commander-in-chief of the Luftwaffe, Hermann Göring, was never achieved. The highest daily total of 289 tons of goods was achieved with 154 aircraft on December 19, 1942, in good weather conditions.
In the first week from November 23, 1942, with an average of 30 flights per day, only a total of 350 tons of cargo were flown in, of which 14 tons were provisions for the 275,000 men in the pocket (about 51 grams per person, corresponding to roughly two slices of bread). 75 percent of the cargo consisted of fuel for the return flights, for the tanks and for the Bf 109 escort fighters in the pocket. In the second week a quarter of the required amount was transported, a total of 512 tons, of which only 24 tons were food. This meant that more draught animals had to be slaughtered to compensate for the lack of food. Since the troops that were still operational had priority in supply, the wounded and sick soon received no more food and fought bitterly for the last places in the transport aircraft.
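The per-man figure follows directly from the tonnage and troop numbers given above: 14 t of provisions spread over 275,000 men is 14,000,000 g ÷ 275,000 ≈ 51 g per man for the entire week, i.e. only a few grams per day.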
From November 24, 1942 to January 31, 1943, the Luftwaffe suffered the following losses of transport aircraft on supply flights:
- Junkers Ju 52/3m: 269
- Heinkel He 111: 169
- Junkers Ju 86: 42
- Focke-Wulf Fw 200: 9
- Heinkel He 177: 5
- Junkers Ju 290: 1
The losses amounted to about 50 percent of the aircraft used. In order to make up for the loss of pilots, the Luftwaffe's training programme was suspended in favour of the air supply to Stalingrad, and the instructors thus freed up, who were in fact irreplaceable, were used up as transport pilots. As the war continued, this led to a noticeable deterioration in the training of new pilots. In addition, combat sorties in other theatres of war were significantly reduced in order to save fuel for the Stalingrad operation.
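Summing the table above gives the total number of transport aircraft lost on the airlift: 269 + 169 + 42 + 9 + 5 + 1 = 495 aircraft. Combined with the roughly 50 percent loss rate stated here, this suggests that on the order of a thousand transport aircraft were committed to the operation over its course.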
German relief attempt - "Enterprise Winter Storm"
To command the units in and around Stalingrad, the new Army Group Don was formed from AOK 11 on November 26, 1942, under Field Marshal Erich von Manstein, with headquarters in Novotscherkask. A few days earlier, Manstein and Field Marshal von Weichs had been briefed on the difficult situation of the 6th Army at the headquarters of Army Group B in Starobelsk. Hitler had also forbidden a breakout because he wanted to preserve the prestige of "German soldiers standing on the Volga", and he ordered three panzer divisions sent towards Stalingrad for the relief. In addition to the encircled 6th Army, Army Group Don was assigned the 4th Panzer Army, including the remnants of the Romanian 4th Army, in the Kotelnikowo area. There were also the combat groups and alarm units of the XVII. Army Corps on the Tschir sector, as well as the remnants of the Romanian 3rd Army. After the 7th Luftwaffe Field Division at Nizhne Tschirskaja, which was supplied via Morosovskaya, was completely destroyed in Soviet attacks, the newly formed Army Detachment Hollidt took over the defense on the Tschir. The Don bridgehead at Tschirskaja was held by the Tzschökell and Adam groups; south of it the von der Gablenz combat group provided security. To the west, on the southern bank of the Tschir, the 11th Panzer Division, the 336th Infantry Division and the Stumpfeld and Schmidt combat groups provided security. The XXXXVIII. Panzer Corps, whose headquarters was in Tormosin, acted as support.
On December 12, 1942, the 4th Panzer Army under Colonel General Hoth launched the relief attack, "Operation Winter Storm", to relieve the 6th Army. At first the LVII. Panzer Corps (General der Panzertruppe Kirchner) attacked with only the 6th Panzer Division (General Raus) and the 23rd Panzer Division (General Vormann). After the 17th Panzer Division (Lieutenant General von Senger und Etterlin) arrived on the battlefield on December 17th, the southern bank of the Myshkova river was taken in battle. To make the operation a success, the 6th Army was in addition supposed to attempt a breakout from the pocket towards Hoth's army group under the codeword "Thunderbolt". Starting from Kotelnikowo south of Stalingrad, the relief attack was brought to a halt 48 km short of the pocket by strong resistance from the Soviet 2nd Guards Army (Lieutenant General Rodion Malinowski), the 5th Shock Army and the 7th Panzer Corps (Major General Rotmistrow). The Soviet major offensive launched at the same time further northwest on the central Don, Operation Saturn, which led to the collapse of the Italian 8th Army on December 16 and thus threatened the entire Army Group South with encirclement, forced the immediate cessation of the relief attempt. In view of the poor condition of their own troops, the leadership around Paulus regarded the breakout of the 6th Army demanded by Manstein as a "catastrophic solution". Hitler repeatedly refused a breakout from the pocket, most recently on December 21, because the motorized units of the 6th Army had too little fuel to cover the distance to Hoth's panzer army. The relief attempt had to be abandoned on December 23. The situation of the German soldiers and their allies was thus finally hopeless.
The "Operation Kolzo" and the end of the 6th Army
At the end of September 1942, on the orders of the Soviet High Command, the Don Front was formed by renaming the Stalingrad Front; Colonel General KK Rokossowski received supreme command. Its forces initially comprised the 21st, 24th, 63rd, 65th and 66th Armies; from January 1, 1943, the 57th, 62nd and 64th Armies joined the front, all of which were involved in the encirclement of the 6th Army. Despite the hopeless situation, Colonel General Paulus rejected the Soviet side's demand to surrender on January 8, 1943.
On January 10, 1943, the armies of the Don Front began their last major offensive against the remnants of the 6th Army, Operation Kolzo (Russian for "ring"). The aim was to "shatter" the Stalingrad pocket. On the one hand the ring around the trapped troops was tightened; on the other the main front moved further to the west, cutting the 6th Army off even further from its own forces. In the course of this the Soviet troops also captured the two airfields Pitomnik (January 16) and Gumrak (January 22). From then on, Wehrmacht aircraft took off and landed only at the "Stalingradski" emergency airstrip, until that too fell into Soviet hands and supplies could only be dropped over the pocket.
Finally, on January 25, the forces of the Wehrmacht were split into a southern and a northern pocket. On January 28 the northern pocket was split again into a central and a northern pocket.
On January 30, 1943, Paulus was promoted to Field Marshal General by radio message from the Führer Headquarters . Since no General Field Marshal of the Wehrmacht had been taken prisoner by then, Hitler wanted to exert additional pressure on Paulus with this promotion to keep the position under all circumstances - or to indirectly encourage him to commit suicide .
On the same day, an address to the German people from the hall of honor of the Reich Aviation Ministry in Berlin was announced. Since the “Führer” was deliberately never supposed to speak in connection with an outright defeat, the “second man of the Reich”, Goering, was appointed to prepare the Germans for this. The British knew about Goering's twelve o'clock appointment, which was broadcast on the radio, and ensured an embarrassing delay of an hour with a few high-speed bombers over the capital. From the speech formulas that had become generally transparent, the audience could then infer the hopeless situation of those trapped.
On the morning of January 31, Red Army troops penetrated the "Univermag" department store, in whose basement the headquarters of the 6th Army was located. At 7:35 am the radio station there sent its last two messages: "The Russians are at the door. We are preparing for destruction", and shortly afterwards: "We are destroying." After further attacks by the Red Army on the remaining German positions, Major General Roske, commander of the 71st Infantry Division, surrendered the southern pocket. Immediately afterwards Major General Laskin, chief of staff of the Soviet 64th Army, came to the headquarters of the 6th Army, where the surrender negotiations began. On the same day the central pocket, commanded by Colonel General Heitz, also surrendered.
The commander-in-chief of the 6th Army, Paulus, who was also taken prisoner that day, was interrogated on the night of February 1st by the then Colonel General and later Marshal of the Soviet Union Konstantin Rokossowski. Hitler raged when he learned of the capture of the commander-in-chief. Paulus had expressly forbidden all his officers to commit suicide, on the grounds that they had to share the fate of their soldiers and go into captivity.
Operation Kolzo finally came to an end only with the cessation of fighting in the northern pocket, which - with the remnants of 21 German and two Romanian divisions, barely capable of fighting and completely undersupplied, under General of the Infantry Karl Strecker as commanding general - capitulated on February 2nd, 1943.
On February 3, around noon, the OKW had a special report read out on the Großdeutscher Rundfunk, which stated that the 6th Army had fought "under the exemplary leadership of Paulus to the last breath" but had succumbed to a "superior force" and "unfavourable circumstances". The battle was declared a historic "bulwark" held not by a German but by a "European army", representing the fight against communism.
The claims of the Reich broadcasting stations culminated in the assertion that all soldiers of the Sixth Army had been killed. The special report did not mention that a total of 91,000 soldiers had been taken prisoner of war, which the BBC had already reported; as a result, more people in Germany turned to foreign "enemy broadcasters" for their information. Goebbels, who had launched the report, was thereby publicly exposed as a liar.
The Nazi regime ordered three days of national commemoration: bars, cinemas, etc. were closed, the radio only broadcast serious music . However, mourning flags were prohibited, and black frames were not allowed to appear in the press.
However, scattered units of the Wehrmacht fought in the Stalingrad area until March. An NKVD report noted an attack by German soldiers on March 5th as the last documented combat operation. Two Soviet soldiers were wounded in the attack. After a search operation, eight German officers were shot.
In many documentaries, accounts and reports, memories of the Russian winter weather dominate, coloured by the sometimes traumatic experiences of the first winter on the Eastern Front during the fighting for Moscow. The weather during the second and third phases of the battle, however, was neither consistently cold nor unusual. In addition to the phases of severe frost (mainly towards the end of the battle), the visibility conditions, and thus the flying weather, were of military importance. During the bad-weather phases visibility was sometimes so poor that either no pilots or only very experienced pilots could take off, which further worsened the supply situation.
At the beginning of the Russian offensive there was only light frost and mostly poor visibility. After the enclosure, winter weather prevailed in the last week of November with snowfall and mostly light frosts. Shortly before the turn of the month, the thaw set in with rain, which made the paths difficult to pass.
A few days followed with changeable weather and repeated rain and snowfalls. The ice on the Volga was only continuous in the peripheral areas; the ice cover was not stable. Visibility was generally poor at the time. From December 10th it cleared up and there were no more thaw phases during the day either. The frost was only moderate. Around December 14th there was a brief thaw, followed by clearer weather with night frosts down to −15 °C.
Shortly before Christmas there was again poor visibility, with changeable weather and at times a light thaw. Heavy snowfall set in on Christmas Eve, and over the Christmas days the temperature fell to −30 °C for the first time. It then cleared up, however, and there was good flying weather.
On New Year's Eve a slight thaw set in again for two to three days, before moderate frosts of around −15 °C returned on January 4th. It then turned a little milder again, with short thaw periods. From January 11th heavy snowfalls set in, followed at times by very heavy frost down to −30 °C.
The military historian Rolf-Dieter Müller speaks of "enormous casualties" on the Soviet side in this battle: "According to official figures, the Stalingrad defensive operation alone cost the Red Army 323,856 dead up to November 18, 1942, and 319,986 wounded men." The military historians Gerd R. Ueberschär and Wolfram Wette emphasize "that the casualties of the Soviet army and the Stalingrad civilian population were much higher than the German casualties". They assume about "one million soldiers and an unknown number of civilians". While Stalingrad had almost half a million inhabitants when the war broke out, the city had fewer than 8,000 inhabitants when it was recaptured by the Red Army, according to the historian Jochen Hellbeck.
On the German side, Field Marshal Paulus went into captivity with his staff and a large number of generals. The scale of the German losses is a matter of controversy. According to Rolf-Dieter Müller, the figures are now put somewhat lower than in earlier estimates. According to Müller, 195,000 German soldiers were initially encircled (other figures: 220,000). Of these, 60,000 died in the pocket, and 25,000 wounded (other figures: 40,000) were flown out. According to Müller, 110,000 men were taken prisoner, of whom only 5,000 (other figures: 6,000) returned after 1945; most of the prisoners died within a few weeks or months owing to "incompetence and lack of supply on the Soviet side". It must also be taken into account, however, that the prisoners were in extremely poor condition. Almost all were completely malnourished, many suffered from frostbite and injuries, and since the German Luftwaffe had destroyed all railway stations in the Soviet hinterland, the prisoners now had to cover long distances on foot, which overwhelmed many of them. Poor hygienic conditions led to further illness. In particular typhus (spotted fever), transmitted by lice and already rampant among the soldiers before the surrender, claimed most of the victims in the prison camps. At the end of the Battle of Stalingrad, the carcasses of around 52,000 Wehrmacht horses lay in the ruins of the completely destroyed city.
In the discussions about Stalingrad it is argued again and again that the "sacrifice" of the 6th Army, i.e. its deliberate holding of a militarily hopeless position, was "necessary" to prevent even greater losses on other sections of the front. But not only was the war de facto lost for the Germans after the Battle of Moscow and the entry of the USA into the war in the winter of 1941; Hitler's decision to attack the Caucasus and Stalingrad simultaneously was doomed to failure from the outset, because the troops were consequently undersupplied and fast motorized units were lacking. Not only had the Red Army in the meantime developed a more flexible and efficient defensive strategy, but by the end of September 1942 at the latest it was also evident that the troops in these regions could not be adequately supplied in winter. The situation of the 6th Army in Stalingrad was therefore untenable even before its encirclement in November 1942. That Hitler nevertheless ordered it to hold out can be explained more by considerations of prestige and by his fear of withdrawals than by military considerations. The assertion that the sacrifice of the 6th Army at Stalingrad helped prevent the encirclement of Army Group A in the Caucasus, and thus an even greater catastrophe, can in the opinion of Bernd Wegner be affirmed only up to mid-January. What was not recognized, according to Wegner, was that Hitler's order to withdraw Army Group A, issued only on December 28, 1942, came much too late, and that "under certain circumstances even realistic preconditions for a liberation of the same" could have been created. The army command under Manstein did insist on continuing the fight in the Stalingrad pocket in order to tie down Soviet troops, out of concern for the encirclement of the units of Army Group A in the Caucasus; but even after Army Group A had withdrawn, Hitler forbade ending the fighting.
For a long time, the Battle of Stalingrad was seen as the turning point of World War II. This is due not least to the symbolic quality of the event "Stalingrad", which was associated with Wagner's Götterdämmerung in Nazi propaganda but was also staged by Stalin as a world-historical moment. In Soviet military literature, too, the Battle of Stalingrad is mostly portrayed as a decisive battle. Nikolai Ivanovich Krylow, Chief of Staff of the 62nd Army and later Marshal of the Soviet Union, stated that "the people in the countries invaded by Germany and the millions in the concentration camps (drew) first hope". Historical scholarship largely followed this interpretation of 1943 as the turning point of the war until Andreas Hillgruber, in his book Hitler's Strategy (1965), argued for a turning point in 1941.
Other military historians now also doubt that the Wehrmacht could still have won the war in early 1943. After the entry of the USA into the war and the failure of the blitzkrieg strategy before Moscow in December 1941, a German victory is regarded as unrealistic today. From a military point of view, the defeat at Stalingrad did not mean a "turning point" for the Second World War as a whole, but it did mean the final loss of the strategic initiative in the eastern theater of war. "In this respect," said the military historian Bernd Wegner, "the Stalingrad events really did represent a 'point of no return'".
The Battle of Stalingrad is seen primarily as a psychological turning point, which further weakened the Germans' confidence in the regime. For the first time, the German public was confronted with the possibility of defeat in the war as a whole. The number 1918 could therefore be read on many house walls, recalling the German defeat in the First World War. Domestically, Stalingrad was for many officers the occasion to join the military opposition to Hitler. Political opponents could hope again that the National Socialist dictatorship would one day come to an end. Soviet historiography always emphasized the moral superiority of the defense against the attack in the so-called Great Patriotic War. Today's historians on all sides try not to blur the distinction between wars of aggression and defensive wars when asking what price was paid for each military operation.
In terms of foreign policy, neutral states and states allied with Germany began to prepare for a German defeat. From then on, Great Britain and the USA expected that the Soviet Union would also be among the victorious powers of World War II. The victory of the Red Army, which until then had borne the brunt of the resistance against National Socialist Germany, led to more intensive military efforts by the Western Allies; the Soviet Union "was now recognized in Washington and London as an equal partner in the war against Hitler-Germany". In addition, it had to be acknowledged that, if need be, the Soviet Union could also win the war on its own. This encouraged efforts to establish a second front in the west.
- 6th Army, 4th Panzer Army
- 5 General Commands of the IV. , VIII. , XI. , LI. Army Corps and the XIV Panzer Corps
- 14th , 16th and 24th Panzer Divisions
- 3rd , 29th and 60th Motorized Infantry Divisions
- 44th , 71st , 76th , 79th , 94th , 113th , 295th , 297th , 305th , 371st , 376th , 384th and 389th Infantry Divisions
- 100th Jäger Division and the Croatian Regiment 369
- Romanian 1st Cavalry Division and the Romanian 20th Infantry Division
- Assault Gun Battalion 177 and parts of Assault Gun Battalions 243, 244 and 24
- 5 assault engineer battalions: Engineer Battalions 162, 294, 305, 336 and 389
- various logistical units, anti-aircraft units and ground units of the air force
- Romanian 3rd Army
- Romanian 4th Army
- Italian 8th Army
- Hungarian 2nd Army
- Luftflotte 4, consisting of the IV and VIII Air Corps
- 54 rifle divisions: 1, 10, 23, 24, 29, 38, 45, 49, 63, 64, 76, 84, 91, 95, 96, 99, 112, 116, 119, 120, 126, 138, 153, 157 , 159, 169, 173, 193, 196, 197, 203, 204, 226, 233, 244, 252, 258, 260, 266, 273, 277, 278, 284, 293, 299, 302, 303, 304, 308 , 321, 333, 343, 346, 422
- 12 guard divisions: 4, 13, 14, 15, 27, 34, 36, 37, 39, 40, 47, 50
- 2 marine infantry brigades: 92, 154
- 14 special brigades: 38, 42, 52, 66, 93, 96, 97, 115, 124, 143, 149, 152, 159, 160
- 4 tank corps: 1, 4, 16, 26
- 15 tank brigades: 1, 2, 6, 10, 13, 56, 58, 84, 85, 90, 121, 137, 189, 235, 254
- 3 mechanized corps: 1, 4, 13
- 3 cavalry corps: 3, 4, 89
- 4 air fleets: 8, 11, 16, 17
Honors and commemorations
The Medal for the Defense of Stalingrad was awarded to all members of the Soviet armed forces and civilians who were directly involved in the defense of Stalingrad from July 12 to November 19, 1942. As of January 1, 1995, this medal had been awarded 759,561 times. In the building of Staff Unit No. 22220 in Volgograd, a huge mural is dominated by a representation of the medal: it shows a group of soldiers under a waving flag, rifles with fixed bayonets pointed forward. On the left are the outlines of tanks and a squadron of aircraft, above them the five-pointed Soviet star.
The asteroid (2250) Stalingrad, discovered on April 18, 1972 in the outer main belt, was named after the Battle of Stalingrad.
Russian commemorative coins
On the occasion of the 50th anniversary of the end of the battle, a copper-nickel commemorative coin with a face value of 3 rubles was issued in 1993 in honor of the city of Stalingrad.
On the occasion of the 55th anniversary of the end of the war, a coin honoring the hero city of Stalingrad was also released in 2000 as part of the Hero Cities series. The coin, with the inscription СТАЛИНГРАД (Stalingrad), shows attacking soldiers and a heavy tank rolling forward in front of ruined houses.
Temporary renaming of the city of Volgograd to Stalingrad
Seventy years after the end of the Battle of Stalingrad, the city council of Volgograd decided at the end of January 2013 that the city should again bear its old name of Stalingrad on six days of remembrance each year. War veterans had applied for this. The decision sparked heated discussions in Russia. The Commissioner for Human Rights, Vladimir Lukin, condemned the temporary renaming and called it an "insult to the fallen of Stalingrad"; they deserved recognition, "but not in this form". The Communists in Russia demand a permanent return to the city's old name.
Memorial sites in Volgograd
- The premises of the last headquarters of the 6th Army, in the basement of the "UniverMag" (TsUM) department store on Ploshchad Pavshikh Bortsov (Square of the Fallen Fighters), where Paulus and his staff stayed before and after his capture, are used as a museum, the "Pamyat" (Memory) State Museum.
- The Mother Homeland memorial on Mamayev Hill, with the 84-meter-high Mother Homeland statue, commemorates the costly battles for this strategically important hill.
- At the Heroes' Square is the entrance to the Hall of Fame, in which mourning flags document the names of the Soviet fallen.
- The Place of the Fallen Warriors is a monument with an eternal flame for the fallen Soviet soldiers. There are graves in several places. In memory of the soldiers, wedding couples lay bouquets at the memorial (the soldiers' monument).
- War cemeteries in Rossoschka: near the former Gumrak airfield, next to the old, completely destroyed village of Rossoschka, a semicircular cemetery for Soviet soldiers was inaugurated in 1997, and a circular cemetery for around 50,000 German fallen from the Stalingrad area followed in 1999.
- Opposite the ruins of the Grudinin mill in the city center, an inscription on the facade commemorates the capture of this position by a Soviet soldier.
- The Museum of the Battle of Stalingrad was set up in a round building next to the Grudinin mill; the "Sword of Stalingrad" is also on display there. Winston Churchill presented the sword to Stalin as a gift during the Tehran Conference on the evening of November 29, 1943. It is a ceremonial sword specially made in Sheffield "for the victor of the Battle of Stalingrad", which King George VI dedicated to the citizens of Stalingrad and all citizens of the Soviet Union.
Remembrance in Germany
- On October 18, 1964, the central German memorial commemorating all soldiers who died at Stalingrad or subsequently in captivity was inaugurated at the main cemetery in Limburg an der Lahn. In 1988 the city of Limburg took over the "Stalingrad Fighters Foundation" and thus ensured the preservation and maintenance of the Stalingrad memorial beyond the existence of the "Association of Former Stalingrad Fighters e. V., Germany", which decided on its dissolution in 2004.
- For many people, one image remains associated with the Battle of Stalingrad: the Madonna of Stalingrad. Drawn at Christmas 1942 by the Protestant pastor, doctor and artist Kurt Reuber in a shelter in Stalingrad, in charcoal on the back of a Soviet map, it bears the inscription "1942 Christmas in the cauldron - fortress Stalingrad - light, life, love". Reuber himself did not survive captivity, but the picture reached his family on one of the last aircraft to leave the pocket; at the suggestion of Federal President Karl Carstens, the family handed it over in 1983 to the Kaiser Wilhelm Memorial Church in Berlin, in memory of the fallen and as a warning for peace. In the church (on the wall behind the rows of chairs on the right) hangs the picture of the Madonna, encouraging remembrance and prayer. The Madonna also forms the motif in the coat of arms of Medical Regiment 2 of the Bundeswehr's medical service.
Commemoration in Austria
Every year in February, Stalingrad memorial masses take place in many churches in Austria, usually organized by the Austrian Comradeship Association or other traditional associations. In addition, numerous objects from the battle are exhibited in the Army History Museum in Vienna, including war relics such as steel helmets, boots and pieces of equipment recovered from the battlefield of Stalingrad.
Commemoration in France
There is a Stalingrad metro station in Paris. It is located on the Place de la Bataille-de-Stalingrad.
Commemoration in Italy
In Italy, streets in several cities are named Via Stalingrado.
- Antony Beevor : Stalingrad . Orbis-Verlag, Niedernhausen 2002, ISBN 3-572-01312-7 .
- Christoph Birnbaum: It's like a miracle that I'm still alive. Field post letters from Stalingrad, 1942–43. Edition Lempertz, Königswinter 2012, ISBN 978-3-939284-38-3 (in cooperation with the Museum for Communication Berlin).
- William E. Craig: The Battle of Stalingrad. Factual report. Heyne, Munich 1991, ISBN 3-453-00787-5 .
- Torsten Diedrich : Stalingrad 1942/43 . Reclam, Stuttgart 2018, ISBN 978-3-15-011162-8 .
- Jens Ebert (ed.): Field post letters from Stalingrad. Wallstein-Verlag, Göttingen 2003, ISBN 3-89244-677-6 .
- Jürgen Förster (Ed.): Stalingrad. Event, effect, symbol. Piper, Munich 1992, ISBN 3-492-11618-3 .
- Jörg Füllgrabe: "We call Stalingrad". The Nazi myth of the heroic fall of the 6th Army - continuities and breaks in German post-war literature , in: Jens Westemeier (ed.): "So war der Deutschen Landser ...". The popular picture of the Wehrmacht , pp. 123-138, Paderborn (Ferdinand Schöningh) 2019. ISBN 3-506-78770-5
- David M. Glantz, Jonathan M. House: The Stalingrad Trilogy. Volume 2: Armageddon in Stalingrad. September – November 1942. University Press of Kansas, Lawrence, KS 2009 (= Modern War Studies), ISBN 978-0-7006-1664-0.
- Jochen Hellbeck : The Stalingrad Protocols. Soviet eyewitnesses report from the battle. Translation of the minutes from Russian by Christiane Körner and Annelore Nitschke. Fischer Verlag, Frankfurt am Main 2012, ISBN 978-3-10-030213-7 .
- Manfred Kehrig: Stalingrad. Analysis and documentation of a battle. DVA, Stuttgart 1979, ISBN 3-421-01653-4 .
- Nikolai Krylov : Stalingradskij Rubez Stalingrad-The decisive battle of the Second World War. Pahl-Rugenstein, Cologne 1981, ISBN 3-7609-0624-9 .
- Michael Kumpfmüller : The Battle of Stalingrad. Metamorphoses of a German Myth . Wilhelm Fink Verlag, Munich 1995, ISBN 3-7705-3078-0 .
- Kurt Pätzold : Stalingrad and no turning back. Delusion and reality . Militzke Verlag, Leipzig 2002, ISBN 3-86189-275-8 .
- Carl Schüddekopf: In the boiler. Telling of Stalingrad. 3. Edition. Piper, Munich 2004, ISBN 3-492-24032-1 .
- Wassili Iwanowitsch Tschuikow : The battle of the century . Military publishing house of the GDR , Berlin 1988, ISBN 3-327-00637-7 .
- Bernd Ulrich : Stalingrad. Verlag CH Beck, Munich 2005, ISBN 3-406-50868-5 .
- Wolfram Wette , Gerd R. Ueberschär (Ed.): Stalingrad. Myth and Reality of a Battle . extended new edition, also 5th edition. Fischer, Frankfurt am Main 2012, ISBN 978-3-596-19511-4 .
- Wilhelm Adam : The difficult decision , Verlag der Nation, Berlin, 6th edition 1965.
- Heinrich Gerlach : The betrayed army. The Stalingrad novel . Bechtermünz-Verlag, Augsburg 2000, ISBN 3-8289-6633-0 .
- Wassili Grossman: Turning Point on the Volga. Dietz Verlag, Berlin 1958.
- Wassili Grossman: Life and Fate. Roman (Russian Жизнь и судьба, 1959). Claassen Verlag, Berlin 2007, ISBN 978-3-546-00415-2 .
- Walter Kempowski : The echo sounder. A collective diary. January and February 1943 . 4 volumes. Knaus, Munich 1993, ISBN 3-8135-2099-4 .
- Alexander Kluge : Description of the battle. Walter, Olten / Freiburg im Breisgau 1964. Other edition: Suhrkamp, Frankfurt am Main 1997, ISBN 3-518-11193-0 .
- Walter Naumann: Stalingrad must be held ... A novel that was written while a prisoner of war in the Urals . Edited by Eva Krack, Günter Leikauf, Carla Raschke. Published in “Narrating is remembering”, series of publications by the Volksbund Deutsche Kriegsgräberfürsorge eV Volume 113. Kassel 2013. ISBN 978-3-936592-34-4 .
- Viktor Nekrasov : Stalingrad. 3. Edition. Aufbau-Taschenbuchverlag, Berlin 2003, ISBN 3-7466-1842-8 .
- Theodor Plievier : Stalingrad . Parkland-Verlag, Cologne 2003, ISBN 3-89340-074-5 .
- Fritz Wöss: Dogs, do you want to live forever? Belle Époque Verlag, Tübingen 2017, ISBN 978-3-945796-82-5 (first published in autumn 1957; filmed in 1959).
- Konstantin Simonow: Days and Nights. Verlag Volk und Welt, Berlin 1948 (German language edition), LN 302, 410/179/81, 6th edition 1981.
Films about the Battle of Stalingrad
- 1949 The Battle of Stalingrad , USSR ( Mosfilm )
- 1972 documentary Lettres de Stalingrad by Jacqueline Veuve
- 2002 RBB documentary Stalingrad by director Christian Klemke and co-author Jan N. Lorenzen
- 2006 Documentary film Stalingrad by Sebastian Dehnhardt under the direction of Guido Knopp
- 2006 documentary film The Battle of Stalingrad. Great Britain 2006, German dubbing commissioned by N24 in 2010. Shown on N24 on December 29, 2014, 10:05 pm - 11:05 pm. (Course of events from September 1942 to January 1943: snipers, house-to-house fighting, retreat into the sewer system, the Soviet counter-offensive, the pocket, the Soviet breakthrough on the Don)
The battle for Stalingrad was implemented in several films, some of which were propagandistic. Films that strive for objectivity and deal with the cruelty of war in general are:
- 1958 Dogs, do you want to live forever? - Director: Frank Wisbar
- 1963 Stalingrad , TV film - director: Gustav Burmester , with Wolfgang Büttner , Hanns Lothar
- 1967 You are not born a soldier - Director: Alexander Stolper
- 1969 Letters from Stalingrad - Director: Gilles Katz
- 1993 Stalingrad - Director: Joseph Vilsmaier
- 2001 Duel - Enemy at the Gates - Director: Jean-Jacques Annaud
- 2008 Appassionata - Director: Mirko Echghi-Ghamsari
- 2013 Stalingrad - Director: Fyodor Bondarchuk
Contemporary witnesses in the film
- 2008 Stalingrad - Volgograd. Encounters in the city of fate. Report. Hanse TV on behalf of NDR and rbb. Rebroadcast on BR-alpha on February 3rd, 2010, 7:30 p.m. to 8:15 p.m. (Contemporary witness Horst Zank, who was captured by the Soviets and survived, visits his old positions on the Don and Volga, the Soviet war memorials and the German-Russian war cemetery at Rossoshka, and exchanges views with Russian veterans and the Russian population about peace as a lesson from the past.)
- Silent night in Stalingrad. In: ZDFzeit . Shown on ZDF television on December 11, 2012, from 8:15 pm to 9:00 pm. (Course of events, Russian and German contemporary witnesses, Stalingrad Madonna , affected family members. Partial filmic reconstructions.)
- Stalingrad - The way of the 6th Army to Stalingrad, the encirclement and the failed relief attempt, Sept. 1941 - 31 Dec. 1942. Historical pictures and documents from the Federal Archives
- Literature on the battle of Stalingrad in the catalog of the German National Library
- Wolfram Wette: The Myth of a Battle. In: Die Zeit. No. 5, 2003.
- 70 years ago: Battle of Stalingrad. In: RIA Novosti . 17th July 2012.
- Geoffrey Jukes: Stalingrad - The turning point in World War II , London / New York 1968
- Website of the Volksbund Deutsche Kriegsgräberfürsorge (work of the Volksbund and description of the war cemetery in Rossoschka)
- The name dice of the missing in Volgograd-Rossoshka . In: Voice & Way . Edition 4/2006. (PDF; 1.14 MB) ( volksbund.de ( Memento from January 12, 2012 in the Internet Archive ))
- Stalingrad in German school history books (by Wigbert Benz from "Enterprise Barbarossa"; Volume 3 (2003): Edition of historical contributions to the war against the Soviet Union 1941–1945; Ed .: Historisches Centrum Hagen)
- H-Museum: Stalingrad / Volgograd 1943–2003. Memory (English and German version)
- Oliver von Wrochem: Remembering Stalingrad. To historicize a myth . In: Zeithistorische Forschungen / Studies in Contemporary History 1 (2004), pp. 310–317.
- Richard Overy: Russian War. Rowohlt Verlag, 2004, ISBN 3-498-05032-X , p. 286; Torsten Diedrich: Stalingrad 1942/43 . Reclam, Stuttgart 2018, ISBN 978-3-15-011162-8 , p. 149.
- Aleksandr Michailowitsch Samsonow: Stalingradskaja Bitwa. Isdvo Akademii Nauk, Moscow 1960, p. 257.
- Richard Overy: Russian War. Rowohlt Verlag, 2004, ISBN 3-498-05032-X , p. 249.
- Matthew Cooper: The Air Force 1933-1945: A Chronicle. Motorbuchverlag, Stuttgart 1988, ISBN 3-613-01017-8 , p. 259.
- Matthew Cooper: The Air Force 1933-1945: A Chronicle. Motorbuchverlag, Stuttgart 1988, ISBN 3-613-01017-8 , p. 264.
- Heinz Schröter: Stalingrad. To the last cartridge. Klein's printing and publishing company, 1945, p. 121.
- Otto Heinrich Kühner: Wahn und Untergang, 1939–1945. Deutsche Verlags-Anstalt , Stuttgart 1957, p. 164.
- Matthew Cooper: The Air Force 1933-1945: A Chronicle. Motorbuchverlag, Stuttgart 1988, ISBN 3-613-01017-8 , p. 264.
- Bernd Ulrich: Stalingrad . CH Beck, Munich 2005, p. 90.
- So in: Manfred Griehl, Joachim Dressel: Heinkel He 177-277-274. A documentation of aviation history. Motorbuch Verlag, Stuttgart 1989, p. 81.
- Bernd Wegner : The war against the Soviet Union 1942/43. In: The German Reich and the Second World War . Volume 6: The global war - the expansion to the world war and the change of the initiative 1941 to 1943. Deutsche Verlags-Anstalt, Stuttgart 1990, p. 1048.
- Ian Kershaw: Hitler. 1936-1945. DVA, Stuttgart 2000, pp. 715f.
- Wolfgang Benz , Hermann Graml , Hermann Weiß (eds.): Encyclopedia of National Socialism . 5th, updated and expanded edition. dtv, Stuttgart 2007, p. 746.
- Antony Beevor: Stalingrad. Goldmann Verlag, Munich 2001, ISBN 3-442-15101-5 , p. 454.
- Cf. Antony Beevor: The Second World War. Munich 2014, p. 457.
- Hellbeck: The Stalingrad Protocols. 2012, p. 276.
- Excerpts from the war diary of the 6th Army:
- Rolf-Dieter Müller: The last German war. 1939-1945. Stuttgart 2005, p. 174.
- Wolfram Wette , Gerd R. Ueberschär (Ed.): Stalingrad. Myth and Reality of a Battle . extended new edition at the same time 5th edition. Fischer, Frankfurt am Main 2012, ISBN 978-3-596-19511-4 , p. 15.
- Jochen Hellbeck: The Stalingrad Protocols. Soviet eyewitnesses report from the battle. S. Fischer, Frankfurt am Main 2012, p. 13 and p. 19.
- Rolf-Dieter Müller The Last German War. 1939-1945. Stuttgart 2005, p. 176.
- Wolfgang U. Eckart : Illness and wounding in the Stalingrad pocket , in: Wolfgang U. Eckart and Alexander Neumann (eds.): Medicine in the Second World War. Military medical practice and medical science in the "total war", Schöningh Paderborn 2006, pp. 69-92, ISBN 978-3-506-75652-7 , Eckart: Medicine in the Second World War .
- Manfred Hettling: Perpetrator and victim? The German soldiers in Stalingrad. In: Archives for Social History. 35 (1995), pp. 518f.
- Bernd Wegner: The war against the Soviet Union 1942/43. In: The German Reich and the Second World War . Volume 6: The global war - the expansion to the world war and the change of initiative 1941 to 1943. Deutsche Verlags-Anstalt, Stuttgart 1990, p. 1063.
- Bernd Wegner: The Myth "Stalingrad" (November 19, 1942– February 2, 1943). In: Gerd Krumeich, Susanne Brandt (eds.): Battle myths. Event - narration - memory. Böhlau, Cologne 2003, p. 184.
- Nikolai Krylow: Stalingradskij Rubez Stalingrad - The decisive battle of the Second World War. Pahl-Rugenstein, Cologne 1981, ISBN 3-7609-0624-9 , p. 1.
- Michael Salewski: Kriegswenden: 1941, 1942, 1944. In: Communications of the joint commission for research into the recent history of German-Russian relations. 2 (2005), pp. 97f.
- Bernd Wegner: The war against the Soviet Union 1942/43. In: The German Reich and the Second World War . Volume 6: The global war - the expansion to the world war and the change of initiative 1941 to 1943 . Deutsche Verlags-Anstalt, Stuttgart 1990, p. 1102.
- Jürgen Förster : Tough Legends. Stalingrad, August 23, 1942 to February 2, 1943. In: Stig Förster, Markus Pöhlmann , Dierk Walter (eds.): Battles of world history. From Salamis to Sinai . dtv, Munich 2004, ISBN 3-423-34083-5 , pp. 325–337, here p. 335; Jörg Echternkamp : The 101 most important questions. The second World War. CH Beck, Munich 2010, p. 42.
- Jürgen Förster: Tough Legends. Stalingrad, August 23, 1942 to February 2, 1943. In: Stig Förster, Markus Pöhlmann, Dierk Walter (eds.): Battles of world history. From Salamis to Sinai . dtv, Munich 2004, p. 337.
- Lutz D. Schmadel: Dictionary of Minor Planet Names. Fifth Revised and Enlarged Edition. Springer Verlag, Berlin, Heidelberg 2003, ISBN 978-3-540-29925-7, p. 183 (English, 992 pp., link.springer.com [online; accessed November 1, 2017]): "Named in commemoration of the fierce battle for the city."
- Controversial memorial campaign in Russia: Volgograd will briefly be called Stalingrad again. on: Focus Online. January 31, 2013.
- Stalingrad monument at the main cemetery. on: limburg.de
- Army History Museum / Military History Institute (ed.): The Army History Museum in the Vienna Arsenal . Verlag Militaria , Vienna 2016, ISBN 978-3-902551-69-6 , p. 142.
- see Via Stalingrado in Italy
- The Battle of Stalingrad - Part 1 in the Internet Movie Database (English), The Battle of Stalingrad - Part 2 in the Internet Movie Database (English)
- Classics of the German television game. Stalingrad . TV broadcast information. The crime thriller homepage , accessed on April 10, 2020.
|
If you do want to produce your own worksheets and don't have the Microsoft software, you can download free tools like OpenOffice or use an online word processor or spreadsheet such as the free Google Docs, which help you do similar tasks. You just need to create a table with as many rows and columns as you need and then type in some numbers before printing it off for your children to practice; depending on the level of complexity, choose single digits or multiple digits. If you're not sure what level to start at, aim low: start with easy numbers and see how your child goes. The self-esteem boost they'll get from acing the first worksheet will give them confidence for more difficult math problems.
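For readers comfortable with a little scripting, the same idea can be automated instead of typed into a table by hand. The following is only a rough sketch (not from the original article; the problem count, number range and layout are arbitrary choices) that prints a simple addition worksheet:

```python
import random

def make_addition_worksheet(n_problems=12, max_value=9, seed=None):
    """Return a printable list of simple addition problems as one string."""
    rng = random.Random(seed)
    lines = []
    for i in range(1, n_problems + 1):
        a = rng.randint(0, max_value)
        b = rng.randint(0, max_value)
        lines.append(f"{i:2d})  {a} + {b} = ____")
    return "\n".join(lines)

if __name__ == "__main__":
    # Start with easy single-digit sums; raise max_value for multi-digit practice.
    print(make_addition_worksheet(n_problems=12, max_value=9, seed=42))
```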
So, what does it take to be smart in mathematics? My answer is: stay focused on math at each and every level of your studies. Participate in your class math practice sessions. Ask your teacher lots of questions until you are clear about every concept. Mathematics is a subject of solving problems on paper by hand rather than only reading about them. While in Social Studies doing more reading makes you smart, in math practicing lots of problems and solving them by hand makes you smart.
The importance of teaching our children math simply can't be overstated. Basic math skills such as understanding addition, subtraction, multiplication, division, fractions, and percentages are all basic life skills. We use these skills every day, often without even thinking about it. The question, then, isn't why our children should learn math, but what we can do to help them learn. Children need our encouragement. They need the help and guidance of family members. They also need numerous opportunities to practice.
Instead of having students learn a topic and then do lots of mathematical examples based on what they have just learned, teachers have found it effective to use interactive activities, learning games, printable worksheets, assessments, and reinforcement. The math curriculum should rely on many learning tools - lessons with activities, worksheets, reinforcement exercises, and assessments - to help a student learn each math topic in a variety of ways, and this should help supplement the teaching in class.
Valentine's Day is a great opportunity to give your kids a fun and engaging math activity. There are a number of themed activities and math worksheets available through a quick Internet search. Here's how you can use these resources to give the kids a math lesson that they'll enjoy. You can create a number of other fun activities to bring out the inner-mathematician in a child. For example, if you want to teach kids about the basic ideas of volume and surface area, then Valentine's Day could really lend a hand. Fill three glass jars that differ in size with some heart-shaped candy. Ask the kids to estimate how many sweet treats they think each jar contains.
Once downloaded, you can customize the math worksheet to suit your kid. The level of the child in school will determine the look and content of the worksheet. Use the school textbook that your child uses at school as a reference guide to help you in the creation of the math worksheet. This will ensure that the worksheet is totally relevant to the kid and will help the child improve his or her grades in school.
|
Polar motion of the Earth is the motion of the Earth's rotational axis relative to its crust. This is measured with respect to a reference frame in which the solid Earth is fixed (a so-called Earth-centered, Earth-fixed or ECEF reference frame). This variation is only a few meters.
Polar motion is defined relative to a conventionally defined reference axis, the CIO (Conventional International Origin), being the pole's average location over the year 1900. It consists of three major components: a free oscillation called Chandler wobble with a period of about 435 days, an annual oscillation, and an irregular drift in the direction of the 80th meridian west, which has lately been shifted toward the east.
The mean displacement far exceeds the magnitude of the wobbles. This can lead to errors in software for Earth observing spacecraft, since analysts may read off a 5-meter circular motion and ignore it, while a 20-meter offset exists, fouling the accuracy of the calculated latitude and longitude. The latter are determined based on the International Terrestrial Reference System, which follows the polar motion.
The slow drift, about 20 m since 1900, is partly due to motions in the Earth's core and mantle, and partly to the redistribution of water mass as the Greenland ice sheet melts and to isostatic rebound, i.e. the slow rise of land that was formerly burdened with ice sheets or glaciers. The drift is roughly along the 80th meridian west. However, since about the year 2000 the pole has found a new direction of drift, roughly along the central meridian. This dramatic eastward shift in drift direction is attributed to global-scale mass transport between the oceans and the continents.
Major earthquakes cause abrupt polar motion by altering the volume distribution of the Earth's solid mass. These shifts, however, are quite small in magnitude relative to the long-term core/mantle and isostatic rebound components of polar motion.
In the absence of external torques, the vector of the angular momentum M of a rotating system remains constant and is directed toward a fixed point in space. In the case of the Earth, it is almost identical with its axis of rotation. The vector of the figure axis F of the system wobbles around M. This motion is called Euler's free nutation. For a rigid Earth which is an oblate spheroid to a good approximation, the figure axis F is its geometric axis defined by the geographic north and south pole. It is identical with the axis of its polar moment of inertia. The Euler period of free nutation is
(1) τE = 1/νE = A/(C − A) sidereal days ≈ 307 sidereal days ≈ 0.84 sidereal years
νE = 1.19 is the normalized Euler frequency (in units of reciprocal years), C = 8.04 × 10³⁷ kg m² is the polar moment of inertia of the Earth, A is its mean equatorial moment of inertia, and C − A = 2.61 × 10³⁵ kg m².
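As a quick numerical check of eq. (1), the moment-of-inertia values quoted above reproduce the 307-day Euler period (a sketch; the sidereal-day/year conversion is only approximate):

```python
# Euler period of free nutation, eq. (1): tau_E = A / (C - A), in sidereal days.
C = 8.04e37           # polar moment of inertia of the Earth, kg m^2 (value quoted above)
C_minus_A = 2.61e35   # C - A, kg m^2 (value quoted above)
A = C - C_minus_A     # mean equatorial moment of inertia

tau_E_days = A / C_minus_A          # Euler period in sidereal days
nu_E = 366.26 / tau_E_days          # normalized frequency in cycles per sidereal year

print(f"tau_E ≈ {tau_E_days:.0f} sidereal days, nu_E ≈ {nu_E:.2f} per year")
```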
The observed angle between M and F is a few hundred milliarcseconds (mas) which gives rise to a surface displacement of several meters (100 mas corresponds to 3.09 m) between the figure axis of the Earth and its angular momentum. Using the geometric axis as the primary axis of a new body-fixed coordinate system, one arrives at the Euler equation of a gyroscope describing the apparent motion of the rotation axis about the geometric axis of the Earth. This is the so-called polar motion.
Observations show that the figure axis exhibits an annual wobble forced by surface mass displacement via atmospheric and/or ocean dynamics, while the period of the free nutation is considerably longer than the Euler period, of the order of 435 to 445 sidereal days. This observed free nutation is called the Chandler wobble. There exist, in addition, polar motions with smaller periods of the order of decades. Finally, a secular polar drift of about 0.10 m per year in the direction of 80° west has been observed; it is due to mass redistribution within the Earth's interior by continental drift and/or slow motions within mantle and core, which change the moment of inertia.
The annual variation was discovered by Karl Friedrich Küstner in 1885 by exact measurements of the variation of the latitude of stars, while S.C. Chandler found the free nutation in 1891. Both periods superpose, giving rise to a beat frequency with a period of about 5 to 8 years (see Figure 1).
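The 5-to-8-year beat follows directly from the closeness of the two periods. A quick check, assuming an annual period of 365.25 days and the 435–445-day Chandler range quoted above (both stated values give a beat period inside the quoted window):

```python
# Beat period between the annual wobble and the Chandler wobble.
annual = 365.25                        # days
for chandler in (435.0, 445.0):        # Chandler period range quoted above, in days
    beat_days = 1.0 / (1.0 / annual - 1.0 / chandler)
    print(f"Chandler period {chandler:.0f} d -> beat ≈ {beat_days / 365.25:.1f} years")
```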
This polar motion should not be confused with the changing direction of the Earth's spin axis relative to the stars with different periods, caused mostly by the torques on the Geoid due to the gravitational attraction of the Moon and Sun. They are also called nutations, except for the slowest, which is the precession of the equinoxes.
Polar motion is observed routinely by very-long-baseline interferometry, lunar laser ranging and satellite laser ranging. The annual component is rather constant in amplitude, and its frequency varies by not more than 1 to 2%. The amplitude of the Chandler wobble, however, varies by a factor of three, and its frequency by up to 7%. Its maximum amplitude during the last 100 years never exceeded 230 mas.
The Chandler wobble is usually considered a resonance phenomenon, a free nutation that is excited by a source and then dies away with a time constant τD of the order of 100 years. It is a measure of the elastic reaction of the Earth. It is also the explanation for the deviation of the Chandler period from the Euler period. However, rather than dying away, the Chandler wobble, continuously observed for more than 100 years, varies in amplitude and shows a sometimes rapid frequency shift within a few years. This reciprocal behavior between amplitude and frequency has been described by the empirical formula:
(2) m = 3.7/(ν - 0.816) (for 0.83 < ν < 0.9)
with m the observed amplitude (in units of mas) and ν the frequency (in units of reciprocal sidereal years) of the Chandler wobble. In order to generate the Chandler wobble, recurring excitation is necessary. Seismic activity, groundwater movement, snow load, and interannual atmospheric dynamics have been suggested as such recurring forces; atmospheric excitation seems to be the most likely candidate. Others propose a combination of atmospheric and oceanic processes, with the dominant excitation mechanism being ocean-bottom pressure fluctuations.
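To get a feel for the empirical relation (2), one can tabulate the implied amplitude across its stated validity range (a sketch; the sample frequencies are arbitrary and the day conversion is rough):

```python
# Empirical amplitude-frequency relation of the Chandler wobble, eq. (2):
# m [mas] = 3.7 / (nu - 0.816), quoted for 0.83 < nu < 0.9 (nu in 1/sidereal year).
def chandler_amplitude_mas(nu):
    if not 0.83 < nu < 0.9:
        raise ValueError("eq. (2) is only quoted for 0.83 < nu < 0.9")
    return 3.7 / (nu - 0.816)

for nu in (0.84, 0.86, 0.88):
    period_days = 366.26 / nu   # rough conversion to a period in sidereal days
    print(f"nu = {nu:.2f} (~{period_days:.0f} d): m ≈ {chandler_amplitude_mas(nu):.0f} mas")
```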
Current and historic polar motion data is available from the International Earth Rotation and Reference Systems Service Earth Orientation products. Note in using this data that the convention is to define px to be positive along 0° longitude and py to be positive along 90°W longitude.
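When working with these Earth Orientation series, the published pole coordinates can be converted into a surface displacement using the equivalence quoted earlier (100 mas corresponds to 3.09 m). A minimal sketch with hypothetical sample values (the numbers are illustrative, not real IERS data):

```python
import math

M_PER_ARCSEC = 3.09 / 0.1   # 100 mas ≈ 3.09 m  ->  30.9 m per arcsecond

def pole_offset_metres(px_arcsec, py_arcsec):
    """Surface displacement of the pole in metres.

    Convention from the text: px positive along 0° longitude,
    py positive along 90°W longitude."""
    return px_arcsec * M_PER_ARCSEC, py_arcsec * M_PER_ARCSEC

# Hypothetical pole coordinates of the order of the observed wobble (a few 0.1 arcsec).
x_m, y_m = pole_offset_metres(0.10, -0.25)
print(f"~{x_m:.1f} m toward 0°E, {y_m:.1f} m toward 90°W, total {math.hypot(x_m, y_m):.1f} m")
```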
There is now general agreement that the annual component of polar motion is a forced motion excited predominantly by atmospheric dynamics. There exist two external forces to excite polar motion: atmospheric winds, and pressure loading. The main component is pressure forcing, which is a standing wave of the form:
(3) p = po Θ−31(θ) cos[2πνA(t − to)] cos(λ − λo)
with po a pressure amplitude, Θ−31 a Hough function describing the latitude distribution of the atmospheric pressure on the ground, θ the geographic co-latitude, t the time of year, to a time delay, νA = 1.003 the normalized frequency of one solar year, λ the longitude, and λo the longitude of maximum pressure. The Hough function in a first approximation is proportional to sinθ cosθ. Such standing wave represents the seasonally varying spatial difference of the Earth's surface pressure. In northern winter, there is a pressure high over the North Atlantic Ocean and a pressure low over Siberia with temperature differences of the order of 50°, and vice versa in summer, thus an unbalanced mass distribution on the surface of the Earth. The position of the vector m of the annual component describes an ellipse (Figure 2). The calculated ratio between major and minor axis of the ellipse is
(4) m1/m2 = νC
where νC is the Chandler resonance frequency. The result is in good agreement with the observations. From Figure 2 together with eq.(4), one obtains νC = 0.83, corresponding to a Chandler resonance period of
(5) τC = 441 sidereal days = 1.20 sidereal years
The remaining fitted parameters are po = 2.2 hPa, λo = −170° for the longitude of maximum pressure, and to = −0.07 years = −25 days.
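The step from eq. (4) to eq. (5) is just the inversion of the fitted resonance frequency. Numerically, with the value read off the annual ellipse above (the sidereal-day conversion factor is approximate):

```python
# Chandler resonance period from the fitted resonance frequency, eqs. (4)-(5).
nu_C = 0.83                        # major/minor axis ratio of the annual ellipse, eq. (4)
sidereal_days_per_year = 366.26    # approximate sidereal days per sidereal year

tau_C_years = 1.0 / nu_C
tau_C_days = tau_C_years * sidereal_days_per_year
print(f"tau_C ≈ {tau_C_years:.2f} sidereal years ≈ {tau_C_days:.0f} sidereal days")
```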
It is difficult to estimate the effect of the ocean, which may slightly increase the value of maximum ground pressure necessary to generate the annual wobble. This ocean effect has been estimated to be of the order of 5–10%.
It is improbable that the internal parameters of the Earth responsible for the Chandler wobble would be time dependent on such short time intervals. Moreover, the observed stability of the annual component argues against any hypothesis of a variable Chandler resonance frequency. One possible explanation for the observed frequency-amplitude behavior would be a forced, but slowly changing quasi-periodic excitation by interannually varying atmospheric dynamics. Indeed, a quasi-14 month period has been found in coupled ocean-atmosphere general circulation models, and a regional 14-month signal in regional sea surface temperature has been observed.
To describe such behavior theoretically, one starts with the Euler equation with pressure loading as in eq.(3), however now with a slowly changing frequency ν, and replaces the frequency ν by a complex frequency ν + iνD, where νD simulates dissipation due to the elastic reaction of the Earth's interior. As in Figure 2, the result is the sum of a prograde and a retrograde circular polarized wave. For frequencies ν < 0.9 the retrograde wave can be neglected, and there remains the circular propagating prograde wave where the vector of polar motion moves on a circle in anti-clockwise direction. The magnitude of m becomes:
(6) m = 14.5 po νC/[(ν − νC)² + νD²]^(1/2) (for ν < 0.9)
It is a resonance curve which can be approximated at its flanks by
(7) m ≈ 14.5 po νC/|ν − νC| (for (ν − νC)² ≫ νD²)
The maximum amplitude of m at ν = νC becomes
(8) mmax = 14.5 po νC/νD
In the range of validity of the empirical formula eq.(2), there is reasonable agreement with eq.(7). From eqs.(2) and (7), one finds the number po ∼ 0.2 hPa. The observed maximum value of m yields mmax ≥ 230 mas. Together with eq.(8), one obtains
(9) τD = 1/νD ≥ 100 years
The maximum pressure amplitude required is indeed tiny. It clearly indicates the resonance amplification of the Chandler wobble in the vicinity of the Chandler resonance frequency.
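The closing estimate can be reproduced from eqs. (6)–(9). The sketch below takes the rounded values quoted above (po of the order of 0.2 hPa, νC = 0.83, mmax ≈ 230 mas) and inverts eq. (8) for the damping time; with these rounded inputs the result comes out at the order of 100 years, in line with eq. (9):

```python
import math

p_o = 0.2      # excitation pressure amplitude, hPa (order of magnitude quoted above)
nu_C = 0.83    # Chandler resonance frequency, 1/sidereal year
m_max = 230.0  # maximum observed Chandler amplitude, mas

def amplitude_mas(nu, nu_D):
    """Resonance curve, eq. (6): m = 14.5 p_o nu_C / sqrt((nu - nu_C)^2 + nu_D^2)."""
    return 14.5 * p_o * nu_C / math.sqrt((nu - nu_C) ** 2 + nu_D ** 2)

# Invert eq. (8), m_max = 14.5 p_o nu_C / nu_D, for the damping time tau_D = 1/nu_D.
nu_D = 14.5 * p_o * nu_C / m_max
print(f"tau_D ≈ {1.0 / nu_D:.0f} years (order of 100 years, cf. eq. (9))")

# Off-resonance amplitudes for comparison with the empirical relation, eq. (2).
for nu in (0.84, 0.88):
    print(f"nu = {nu:.2f}: m ≈ {amplitude_mas(nu, nu_D):.0f} mas")
```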
|
PLYMOUTH COLONY (or Plantation), the second permanent English settlement in North America, was founded in 1620 by settlers including a group of religious dissenters commonly referred to as the Pilgrims. Though theologically very similar to the Puritans who later founded the Massachusetts Bay Colony, the Pilgrims believed that the Church of England could not be reformed. Rather than attempting to purify the church, the Pilgrims desired a total separation.
Settlement, Founding, and Growth
One hundred and twenty-five Pilgrims, some of whom founded Plymouth, first departed England in 1608. English authorities had forced the Pilgrims to halt Separatist worship at Scrooby Manor (their residence in Nottinghamshire, England). Thus, seeking freedom of worship, they left for Holland, first passing through Amsterdam and then settling in Leyden. The Pilgrims did indeed enjoy freedom of worship in Leyden but found Holland an imperfect refuge. Most being farmers, the Pilgrims had difficulty prospering in urban Holland. More importantly, the Pilgrims feared their children were growing up in a morally degenerate atmosphere and were adopting Dutch customs and language. Seeing little chance for establishing a separate, godly society in Holland, and fearing the country's conquest by Catholic Spain, which would surely bring the horrors of the Inquisition, the Pilgrims needed a place where they would be left to worship and live as they chose.
Virginia offered such an opportunity. By 1620 the Virginia Company was in deep financial difficulty. One of many measures designed to shore up the company's financial situation was selling special patents to settlers who desired to establish private plantations within Virginia. Though under Virginia's general domain, the Pilgrims would be allowed to govern themselves. Thomas Weston and a group of London merchants who wanted to enter the colonial trade financed the Pilgrims' expedition. The two parties came to agreement in July 1620, with the Pilgrims and merchants being equal partners.
The Pilgrims sold most of their possessions in Leyden and purchased a ship—the Speedwell—to take them to Southampton, England. Weston hired another ship—the Mayflower—to join the Speedwell on the voyage to America. On 22 July 1620 a group of about thirty Pilgrims left Delfshaven, Holland, and arrived in Southampton by month's end. They met the Mayflower, which carried about seventy non-Separatists hired by Weston to journey to America as laborers. After a great deal of trouble with the Speedwell, the ship had to be abandoned, and only the Mayflower left Plymouth, England, for America on 16 September 1620. The overcrowded and poorly provisioned ship carried 101 people (35 from Leyden, 66 from London/Southampton) on a sixty-five day passage. The travelers sighted Cape Cod in November and quickly realized they were not arriving in Virginia. Prevented from turning south by the rocky coast and failing winds, the voyagers agreed to settle in the north. Exploring parties were sent into Plymouth harbor in the first weeks of December, and the Mayflower finally dropped anchor there on 26 December 1620. The weary, sickly passengers gradually came ashore to build what would become Plymouth Colony.
The winter was not particularly harsh, but the voyage left the passengers malnourished and susceptible to disease. Half of the passengers died during the first winter, but the surviving colonists, greatly aided by a plundered supply of Indian corn, were still able to establish a stable settlement. The 1617–1619 contagion brought by English fishermen and traders had greatly weakened the local Indian populace, so the Pilgrims initially faced little threat from native peoples. Plymouth town was, in fact, conveniently built on cleared area that had once been an Indian cornfield. The colonists built two rows of plank houses with enclosed gardens on "Leyden Street." Eventually the governor's house and a wooden stockade were erected. At the hill's summit, the settlers built a flat house to serve as the meeting or worship house.
Migration from England allowed the colony to grow, albeit slowly. In 1624 Plymouth Colony's population stood at 124. By 1637 it reached 549. By 1643 settlers had founded nine additional towns. Compared to its neighbor Massachusetts Bay, Plymouth Colony grew very modestly, reaching a population of only about 7,000 by 1691.
Government and Politics
Since the Pilgrims did not settle in Virginia, their patent was worthless, and they established Plymouth without any legal underpinning. Needing to formulate some kind of legal frame for the colony's government, the Pilgrims crafted the Mayflower Compact, in which the signers agreed to institute colonial self-government. The ship's free adult men signed the compact on 11 November 1620 before the settlers went ashore. They agreed to establish a civil government based upon congregational church compact government, in which freemen elected the governor and his assistants, just as congregational church members chose their own ministers.
As the colonists spread out and founded new towns, the system needed modification. Having meetings of all freemen (most adult men) in Plymouth town to elect officials became impractical. Starting in 1638, assemblies of freemen in individual towns chose deputies for a "General Court." William Bradford dominated political life in Plymouth for a generation, being elected thirty times between 1621 and 1656, but the governor's power lessened as the General Court became a true representative assembly. The General Court became a powerful legislature, with sole authority to levy taxes, declare war, and define voter qualifications. Plymouth, however, never received a legal charter from the crown, and based its existence as a self-governing entity entirely on the Mayflower Compact and the two patents issued by the Council for New England in 1621 and 1630, the latter defining the colony's physical boundaries.
Economy and Society
Plymouth was intended for family settlement and commerce, not staple production or resource extraction like many other colonies. The Pilgrims, bound together by their faith and social covenant, envisioned building a self-sustaining agricultural community that would be a refuge for Separatist dissenters. Thus life in Plymouth revolved around family and religion. Every person had a place and set of duties according to his or her position within the colony and family, and was expected to live according to God's law. Those who did not, or those who openly challenged Separatist religious doctrine, were severely punished or driven from the colony entirely.
Small, family farms remained at the heart of Plymouth's economy throughout its history. Land was divided fairly evenly, with each colonist initially receiving 100 acres of land, with 1,500 acres reserved for common use. Apart from home plots, acreage was initially assigned on a yearly basis. When Pilgrim leaders broke with their London merchant partners in 1627, every man was assigned a permanent, private allotment. The venture's assets and debts were divided among the Pilgrim colonists, with single men receiving one share (twenty acres and livestock) and heads of families receiving one share per family member. Farming proved productive enough to make the colony essentially self-sufficient in food production by 1624. The fur trade (initially run by government monopoly) proved very profitable, and allowed the colony to pay off its debt to the London merchants.
The colonists were extremely vulnerable during the first winter, and could have been annihilated had the Indians attacked. The first face-to-face meeting, however, was peaceful. In March 1621 an English-speaking Wampanoag—Samoset—approached Plymouth, and provided useful information about local geography and peoples. On 22 March 1621 Pilgrim leaders met with the Wampanoag chief Massasoit, who was in need of allies, and agreed to a mutual defense treaty. By the late 1630s, however,
the New England colonies (especially Massachusetts) were rapidly expanding, and Indian tribes were increasingly encroached upon. English encroachments in the Connecticut River valley led to the bloody Pequot War in 1637. Plymouth officially condemned Massachusetts's harsh actions against the Pequots, but still joined with that colony and Connecticut in forming the New England Confederation in 1643. The three colonies allied for mutual defense in the wake of massive, rumored Indian conspiracies, but were undoubtedly defending their often aggressive expansion at the Indians' expense.
The last great Indian war in seventeenth-century New England—King Philip's or Metacom's War—was a terrible, bloody affair, resulting in attacks on fifty-two English towns. Metacom (called King Philip by the English) was Massasoit's son, and formed a confederation of Indians to destroy English power. His efforts became intensely focused after he was forced to sign a humiliating treaty with Plymouth in 1671. Plymouth's execution of three Wampanoag Indians in 1675 sparked the war, which started with an attack on several Plymouth villages on 25 June 1675. Intercolonial military cooperation prevented Metacom's immediate victory, but disease and food shortages ultimately prevented him from winning a war of attrition. By the summer of 1676, English forces had rounded up and executed the Indian leaders, selling hundreds more into slavery in the West Indies.
Metacom's War piqued the crown's already growing interest in the New England colonies, and thereafter it set out to bring them directly under royal control. Massachusetts's charter was revoked in 1684, and in 1686 James II consolidated all of New England, plus New York and New Jersey, into one viceroyalty known as the "Dominion of New England." Assemblies were abolished, the mercantile Navigation Acts enforced, and Puritan domination was broken. Hope for self-government was revived in 1688–1689, when Protestant English parliamentarians drove the Catholic James II from power. William III and Mary II (both Protestants) succeeded James by act of Parliament. Massachusetts's leaders followed suit and ousted the Dominion's governor. The new monarchs had no great interest in consolidating the colonies, and thus left the Dominion for dead. The crown issued a new charter for Massachusetts in 1691, but denied the Puritans exclusive government control. Plymouth, by now wholly overshadowed by Massachusetts, failed to obtain its own charter, and was absorbed by Massachusetts in 1691, thus ending the colony's seventy-year history as an independent province.
"Plymouth Colony." Dictionary of American History. 2003. Encyclopedia.com. (September 28, 2016). http://www.encyclopedia.com/doc/1G2-3401803295.html
"Plymouth Colony." Dictionary of American History. 2003. Retrieved September 28, 2016 from Encyclopedia.com: http://www.encyclopedia.com/doc/1G2-3401803295.html
Plymouth Colony, settlement made by the Pilgrims on the coast of Massachusetts in 1620.
Previous attempts at colonization in America (1606, 1607–8) by the Plymouth Company, chartered in 1606 along with the London Company (see Virginia Company), were unsuccessful and resulted in the company's inactivation for a number of years. In 1620 the Plymouth Company, reorganized as the Council for New England, secured a new charter from King James I, granting it all the territory from lat. 40° N to lat. 48° N and from sea to sea. Also in 1620 the Pilgrims, having secured a patent granting them colonization privileges in the territory of the London Company, left Leiden and proceeded to Southampton, where the Mayflower was fitting out for Virginia.
The Mayflower sailed from Plymouth, England, and in Nov., 1620, sighted the coast of Cape Cod instead of Virginia. In December, after five weeks spent in exploring the coast, the ship finally anchored in Plymouth harbor, and the Pilgrims established a settlement. As the patent from the London Company was invalid in New England, the Pilgrims drew up an agreement called the Mayflower Compact, which pledged allegiance to the English king but established a form of government by the will of the majority. Patents were obtained from the Council for New England in 1621 and in 1630, but the Mayflower Compact remained the basis of the colony's government until union with Massachusetts Bay colony in 1691.
During the first winter of the colony, about half of the settlers died from scurvy and exposure, but none of the survivors chose to return with the Mayflower to England. A little corn was raised in 1621, and in October of that year the settlers celebrated the first Thanksgiving Day. However, the arrival of more colonists necessitated half rations, and it was several years before the threat of famine passed.
John Carver, the first governor, died in 1621. William Bradford then assumed the post and served, except for the five years he refused the position, until his death in 1657. A treaty made in 1621 with Massasoit, chief of the Wampanoag, resulted in 50 years of peace with that tribe. The Narragansett tribe farther west was hostile, but Bradford averted trouble from that quarter. In 1623, Capt. Miles Standish marched against the Native Americans to the northwest, who were accused of plotting to exterminate the colonists settled at Weymouth by Thomas Weston. The Native Americans were gradually pushed back and deprived of their lands.
A communistic system of labor, adopted for seven years, was abandoned in 1623 by Bradford because it was retarding agriculture, and land was parceled out to each family. A well-managed fur trade enabled the colony to liquidate (1627) its debt to the London merchants who had backed the venture. The colony, which developed into a quasi-theocracy, expanded slowly due to the infertility of the land and the lack of a staple moneymaking crop.
Expansion and Merger
After several years the colonists could no longer be restrained from settling on the more productive land to the north, and settlements such as Duxbury and Scituate were founded. With the growth of additional towns, a representative system was introduced in 1638, using the town as a unit of government and establishing the General Court, along with the governor and his council, as the lawmaking body. By the time the colony joined the New England Confederation in 1643, 10 towns had been established.
Plymouth suffered severely in King Philip's War (1675–76), and but for aid from the confederation might have been destroyed. The colony became part of the Dominion of New England under the governorship of Sir Edmund Andros. After the Glorious Revolution of 1688–89 in England, the territory that had been under Andros's authority was reorganized, and Massachusetts Bay, Plymouth, and Maine were joined (1691) in the royal colony of Massachusetts.
See N. B. Shurtleff and D. Pulsifer, ed., Records of the Colony of New Plymouth in New England (12 vol., 1855–61, repr. 1968); J. G. Palfrey, History of New England (5 vol., 1858–90, repr. 1966); L. G. Tyler, England in America, 1580–1652 (1904, repr. 1968); H. L. Osgood, The American Colonies in the Seventeenth Century (3 vol., 1904–7, repr. 1957); A. Lord, Plymouth and the Pilgrims (1920); J. T. Adams, The Founding of New England (1921, repr. 1963); C. M. Andrews, The Colonial Period of American History, Vol. I (1934, repr. 1964); G. F. Willison, Saints and Strangers (1945, rev. ed. 1965) and The Pilgrim Reader (1953); S. E. Morison, The Story of the Old Colony of New Plymouth (1956); J. Demos, Little Commonwealth (1970); J. and P. S. Deetz, The Times of Their Lives: Life, Love, and Death in Plymouth Colony (2000); N. Philbrick, Mayflower (2006).
"Plymouth Colony." The Columbia Encyclopedia, 6th ed.. 2016. Encyclopedia.com. (September 28, 2016). http://www.encyclopedia.com/doc/1E1-PlymthCol.html
"Plymouth Colony." The Columbia Encyclopedia, 6th ed.. 2016. Retrieved September 28, 2016 from Encyclopedia.com: http://www.encyclopedia.com/doc/1E1-PlymthCol.html
"Plymouth Colony." World Encyclopedia. 2005. Encyclopedia.com. (September 28, 2016). http://www.encyclopedia.com/doc/1O142-PlymouthColony.html
"Plymouth Colony." World Encyclopedia. 2005. Retrieved September 28, 2016 from Encyclopedia.com: http://www.encyclopedia.com/doc/1O142-PlymouthColony.html
In electronics, a diode is a two-terminal electronic component with asymmetric conductance: it has low (ideally zero) resistance to current flow in one direction, and high (ideally infinite) resistance in the other. A semiconductor diode, the most common type today, is a crystalline piece of semiconductor material with a p–n junction connected to two electrical terminals. A vacuum tube diode is a vacuum tube with two electrodes, a plate (anode) and a heated cathode.
The most common function of a diode is to allow an electric current to pass in one direction (called the diode's forward direction), while blocking current in the opposite direction (the reverse direction). Thus, the diode can be viewed as an electronic version of a check valve. This unidirectional behavior is called rectification, and is used to convert alternating current to direct current, including extraction of modulation from radio signals in radio receivers—these diodes are forms of rectifiers.
However, diodes can have more complicated behavior than this simple on–off action. Semiconductor diodes begin conducting electricity only if a certain threshold voltage or cut-in voltage is present in the forward direction (a state in which the diode is said to be forward-biased). The voltage drop across a forward-biased diode varies only a little with the current, and is a function of temperature; this effect can be used as a temperature sensor or voltage reference.
Semiconductor diodes' nonlinear current–voltage characteristic can be tailored by varying the semiconductor materials and doping, introducing impurities into the materials. These are exploited in special-purpose diodes that perform many different functions. For example, diodes are used to regulate voltage (Zener diodes), to protect circuits from high voltage surges (avalanche diodes), to electronically tune radio and TV receivers (varactor diodes), to generate radio frequency oscillations (tunnel diodes, Gunn diodes, IMPATT diodes), and to produce light (light emitting diodes). Tunnel diodes exhibit negative resistance, which makes them useful in some types of circuits.
Diodes were the first semiconductor electronic devices. The discovery of crystals' rectifying abilities was made by German physicist Ferdinand Braun in 1874. The first semiconductor diodes, called cat's whisker diodes, developed around 1906, were made of mineral crystals such as galena. Today most diodes are made of silicon, but other semiconductors such as germanium are sometimes used.
Thermionic (vacuum tube) diodes and solid-state (semiconductor) diodes were developed separately, at approximately the same time, in the early 1900s, as radio receiver detectors. Until the 1950s, vacuum tube diodes were used more often in radios for three reasons: early semiconductor alternatives (cat's-whisker detectors) were less stable; most receiving sets already contained vacuum tubes for amplification, which could easily have diodes included in the same envelope (for example, the 12SQ7 double-diode triode); and vacuum tube rectifiers and gas-filled rectifiers handled high-voltage/high-current rectification tasks beyond the capabilities of the semiconductor diodes (such as selenium rectifiers) available at the time.
Discovery of vacuum tube diodes
In 1873, Frederick Guthrie discovered the basic principle of operation of thermionic diodes. Guthrie discovered that a positively charged electroscope could be discharged by bringing a grounded piece of white-hot metal close to it (but not actually touching it). The same did not apply to a negatively charged electroscope, indicating that the current flow was only possible in one direction.
Thomas Edison independently rediscovered the principle on February 13, 1880. At the time, Edison was investigating why the filaments of his carbon-filament light bulbs nearly always burned out at the positive-connected end. He had a special bulb made with a metal plate sealed into the glass envelope. Using this device, he confirmed that an invisible current flowed from the glowing filament through the vacuum to the metal plate, but only when the plate was connected to the positive supply.
Edison devised a circuit where his modified light bulb effectively replaced the resistor in a DC voltmeter. Edison was awarded a patent for this invention in 1884. Since there was no apparent practical use for such a device at the time, the patent application was most likely simply a precaution in case someone else did find a use for the so-called Edison effect.
About 20 years later, John Ambrose Fleming (scientific adviser to the Marconi Company and former Edison employee) realized that the Edison effect could be used as a precision radio detector. Fleming patented the first true thermionic diode, the Fleming valve, in Britain on November 16, 1904 (followed by U.S. Patent 803,684 in November 1905).
In 1874 German scientist Karl Ferdinand Braun discovered the "unilateral conduction" of crystals. Braun patented the crystal rectifier in 1899. Copper oxide and selenium rectifiers were developed for power applications in the 1930s.
Indian scientist Jagadish Chandra Bose was the first to use a crystal for detecting radio waves in 1894. He also worked with microwaves in the centimeter and also the millimeter range. The crystal detector was developed into a practical device for wireless telegraphy by Greenleaf Whittier Pickard, who invented a silicon crystal detector in 1903 and received a patent for it on November 20, 1906. Other experimenters tried a variety of other substances, of which the most widely used was the mineral galena (lead sulfide). Other substances offered slightly better performance, but galena was most widely used because it had the advantage of being cheap and easy to obtain. The crystal detector in these early crystal radio sets consisted of an adjustable wire point-contact (the so-called "cat's whisker"), which could be manually moved over the face of the crystal in order to obtain optimum signal. This troublesome device was superseded by thermionic diodes by the 1920s, but after high purity semiconductor materials became available, the crystal detector returned to dominant use with the advent of inexpensive fixed-germanium diodes in the 1950s. Bell Labs also developed a germanium diode for microwave reception, and AT&T used these in their microwave towers that criss-crossed the nation starting in the late 1940s, carrying telephone and network television signals. Bell Labs did not develop a satisfactory thermionic diode for microwave reception.
At the time of their invention, such devices were known as rectifiers. In 1919, the year tetrodes were invented, William Henry Eccles coined the term diode from the Greek roots di (from δί), meaning "two", and ode (from ὁδός), meaning "path". (However, the word diode, like triode, tetrode, pentode, and hexode, was already in use as a term in multiplex telegraphy; see, for example, The Telegraphic Journal and Electrical Review, September 10, 1886, p. 252.)
Typical applications of vacuum tube diodes included:
- power supply (half-wave, full-wave, or bridge) rectifiers
- CRT (especially TV) extra-high-voltage flyback, "damper" or "booster" diodes such as the 6AU4GTA.
Thermionic diodes are thermionic-valve devices (also known as vacuum tubes, tubes, or valves), which are arrangements of electrodes surrounded by a vacuum within a glass envelope. Early examples were fairly similar in appearance to incandescent light bulbs.
In thermionic-valve diodes, a current through the heater filament indirectly heats the thermionic cathode, another internal electrode treated with a mixture of barium and strontium oxides, which are oxides of alkaline earth metals; these substances are chosen because they have a small work function. (Some valves use direct heating, in which a tungsten filament acts as both heater and cathode.) The heat causes thermionic emission of electrons into the vacuum. In forward operation, a surrounding metal electrode called the anode is positively charged so that it electrostatically attracts the emitted electrons. However, electrons are not easily released from the unheated anode surface when the voltage polarity is reversed. Hence, any reverse flow is negligible.
In a mercury-arc valve, an arc forms between a refractory conductive anode and a pool of liquid mercury acting as cathode. Such units were made with ratings up to hundreds of kilowatts, and were important in the development of HVDC power transmission. Some types of smaller thermionic rectifiers sometimes had mercury vapor fill to reduce their forward voltage drop and to increase current rating over thermionic hard-vacuum devices.
Until the development of semiconductor diodes, valve diodes were used in analog signal applications and as rectifiers in many power supplies. They rapidly ceased to be used for most purposes, an exception being some high-voltage high-current applications subject to large transient peaks, where their robustness to abuse still makes them the best choice. As of 2012, some enthusiasts favoured vacuum tube amplifiers for audio applications, sometimes using valve rather than semiconductor rectifiers.
The symbol used for a semiconductor diode in a circuit diagram specifies the type of diode. There are alternate symbols for some types of diodes, though the differences are minor.
A point-contact diode works the same way as the junction diodes described below, but its construction is simpler. A block of n-type semiconductor is built, and a conducting sharp-point contact made with some group-3 metal is placed in contact with the semiconductor. Some metal migrates into the semiconductor to make a small region of p-type semiconductor near the contact. The long-popular 1N34 germanium version is still used in radio receivers as a detector and occasionally in specialized analog electronics.
Most diodes today are silicon junction diodes. The boundary between the p and n regions forms the junction; the zone around it, emptied of charge carriers, is called the depletion region.
p–n junction diode
A p–n junction diode is made of a crystal of semiconductor. Impurities are added to it to create a region on one side that contains negative charge carriers (electrons), called n-type semiconductor, and a region on the other side that contains positive charge carriers (holes), called p-type semiconductor. When the two materials, n-type and p-type, are joined, a momentary flow of electrons occurs from the n side to the p side, producing a third region where no charge carriers are present. This region is called the depletion region because of its absence of charge carriers (electrons and holes in this case). The diode's terminals are attached to each of these regions. The boundary between these two regions, called a p–n junction, is where the action of the diode takes place. The crystal allows electrons to flow from the N-type side (called the cathode) to the P-type side (called the anode), but not in the opposite direction.
A semiconductor diode's behavior in a circuit is given by its current–voltage characteristic, or I–V graph (see graph below). The shape of the curve is determined by the transport of charge carriers through the so-called depletion layer or depletion region that exists at the p–n junction between differing semiconductors. When a p–n junction is first created, conduction-band (mobile) electrons from the N-doped region diffuse into the P-doped region where there is a large population of holes (vacant places for electrons) with which the electrons "recombine". When a mobile electron recombines with a hole, both hole and electron vanish, leaving behind an immobile positively charged donor (dopant) on the N side and negatively charged acceptor (dopant) on the P side. The region around the p–n junction becomes depleted of charge carriers and thus behaves as an insulator.
However, the width of the depletion region (called the depletion width) cannot grow without limit. For each electron–hole pair that recombines, a positively charged dopant ion is left behind in the N-doped region, and a negatively charged dopant ion is left behind in the P-doped region. As recombination proceeds and more ions are created, an increasing electric field develops through the depletion zone that acts to slow and then finally stop recombination. At this point, there is a "built-in" potential across the depletion zone.
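For a sense of scale, the built-in potential can be estimated with the standard textbook relation Vbi = VT ln(NA ND / ni^2). Below is a minimal Python sketch; the doping concentrations and the silicon intrinsic carrier concentration (about 1e10 cm^-3 at 300 K) are illustrative assumptions, not values from the text.

```python
import math

def built_in_potential(na, nd, ni=1.0e10, temp=300.0):
    """Textbook estimate of a p-n junction's built-in potential (volts).

    na, nd: acceptor/donor doping concentrations in cm^-3 (assumed values)
    ni:     intrinsic carrier concentration, ~1e10 cm^-3 for Si at 300 K
    """
    k = 1.380649e-23      # Boltzmann constant, J/K
    q = 1.602176634e-19   # elementary charge, C
    v_t = k * temp / q    # thermal voltage, ~25.85 mV at 300 K
    return v_t * math.log(na * nd / ni**2)

# Moderately doped silicon junction (illustrative round numbers):
print(built_in_potential(na=1e16, nd=1e16))  # ~0.71 V, near the ~0.7 V quoted for silicon
```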
If an external voltage is placed across the diode with the same polarity as the built-in potential, the depletion zone continues to act as an insulator, preventing any significant electric current flow (unless electron–hole pairs are actively being created in the junction by, for instance, light; see photodiode). This is the reverse bias phenomenon. However, if the polarity of the external voltage opposes the built-in potential, recombination can once again proceed, resulting in substantial electric current through the p–n junction (i.e., substantial numbers of electrons and holes recombine at the junction). For silicon diodes, the built-in potential is approximately 0.7 V (0.3 V for germanium and 0.2 V for Schottky). Thus, if an external current is passed through the diode, about 0.7 V will be developed across the diode such that the P-doped region is positive with respect to the N-doped region and the diode is said to be "turned on" as it has a forward bias.
A diode's I–V characteristic can be approximated by four regions of operation.
At very large reverse bias, beyond the peak inverse voltage or PIV, a process called reverse breakdown occurs that causes a large increase in current (i.e., a large number of electrons and holes are created at, and move away from the p–n junction) that usually damages the device permanently. The avalanche diode is deliberately designed for use in the avalanche region. In the Zener diode, the concept of PIV is not applicable. A Zener diode contains a heavily doped p–n junction allowing electrons to tunnel from the valence band of the p-type material to the conduction band of the n-type material, such that the reverse voltage is "clamped" to a known value (called the Zener voltage), and avalanche does not occur. Both devices, however, do have a limit to the maximum current and power in the clamped reverse-voltage region. Also, following the end of forward conduction in any diode, there is reverse current for a short time. The device does not attain its full blocking capability until the reverse current ceases.
The second region, at reverse biases more positive than the PIV, has only a very small reverse saturation current. In the reverse bias region for a normal P–N rectifier diode, the current through the device is very low (in the µA range). However, this is temperature dependent, and at sufficiently high temperatures, a substantial amount of reverse current can be observed (mA or more).
The third region is forward but small bias, where only a small forward current is conducted.
As the potential difference is increased above an arbitrarily defined "cut-in voltage" or "on-voltage" or "diode forward voltage drop (Vd)", the diode current becomes appreciable (the level of current considered "appreciable" and the value of cut-in voltage depends on the application), and the diode presents a very low resistance. The current–voltage curve is exponential. In a normal silicon diode at rated currents, the arbitrary cut-in voltage is defined as 0.6 to 0.7 volts. The value is different for other diode types—Schottky diodes can be rated as low as 0.2 V, Germanium diodes 0.25 to 0.3 V, and red or blue light-emitting diodes (LEDs) can have values of 1.4 V and 4.0 V respectively.
At higher currents the forward voltage drop of the diode increases. A drop of 1 V to 1.5 V is typical at full rated current for power diodes.
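For hand analysis, this forward behavior is often collapsed into the constant-voltage-drop approximation: the diode is treated as an open circuit below the cut-in voltage and as a fixed drop above it. A minimal sketch, with the 0.7 V drop, the supply voltage, and the series resistance all chosen as illustrative assumptions:

```python
def diode_current_cvd(v_supply, r_series, v_drop=0.7):
    """Constant-voltage-drop model of a series resistor + diode circuit:
    no conduction below the assumed cut-in voltage, a fixed v_drop above it."""
    if v_supply <= v_drop:
        return 0.0  # diode off: the branch carries no current
    return (v_supply - v_drop) / r_series  # diode on: Ohm's law on the resistor

print(diode_current_cvd(5.0, 1000.0))  # 0.0043 A, i.e. ~4.3 mA through 1 kOhm
```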
Shockley diode equation
The Shockley ideal diode equation or the diode law (named after transistor co-inventor William Bradford Shockley) gives the I–V characteristic of an ideal diode in either forward or reverse bias (or no bias). With the ideality factor n equal to 1, the Shockley ideal diode equation is

I = IS (e^(VD / (n VT)) − 1)

where (a short numerical sketch follows the list of symbols):
- I is the diode current,
- IS is the reverse bias saturation current (or scale current),
- VD is the voltage across the diode,
- VT is the thermal voltage, and
- n is the ideality factor, also known as the quality factor or sometimes emission coefficient. The ideality factor n typically varies from 1 to 2 (though it can in some cases be higher), depending on the fabrication process and semiconductor material, and in many cases is assumed to be approximately equal to 1 (thus the notation n is omitted). The ideality factor does not form part of the Shockley ideal diode equation; it was added to account for imperfect junctions as observed in real diodes. By setting n = 1 above, the equation reduces to the Shockley ideal diode equation.
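As a numerical sketch of the diode law above: the saturation current used here is an assumed placeholder (real diodes span many orders of magnitude), while k and q are standard physical constants.

```python
import math

def shockley_current(v_d, i_s=1e-12, n=1.0, temp=300.0):
    """Shockley diode equation: I = IS * (exp(VD / (n * VT)) - 1).

    i_s: reverse bias saturation current in amperes (illustrative assumption)
    n:   ideality factor; 1 gives the ideal equation
    """
    k = 1.380649e-23      # Boltzmann constant, J/K
    q = 1.602176634e-19   # elementary charge, C
    v_t = k * temp / q    # thermal voltage, ~25.85 mV at 300 K
    return i_s * (math.exp(v_d / (n * v_t)) - 1.0)

print(shockley_current(0.7))   # forward bias: the exponential term dominates
print(shockley_current(-5.0))  # reverse bias: current saturates near -IS
```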
The thermal voltage VT is approximately 25.85 mV at 300 K, a temperature close to "room temperature" commonly used in device simulation software. At any temperature it is a known constant defined by:

VT = kT / q

where k is the Boltzmann constant, T is the absolute temperature of the p–n junction in kelvins, and q is the elementary charge.
The reverse saturation current, IS, is not constant for a given device, but varies with temperature; usually more significantly than VT, so that VD typically decreases as T increases.
The Shockley ideal diode equation or the diode law is derived with the assumption that the only processes giving rise to the current in the diode are drift (due to electrical field), diffusion, and thermal recombination–generation (R–G) (this equation is derived by setting n = 1 above). It also assumes that the R–G current in the depletion region is insignificant. This means that the Shockley ideal diode equation doesn't account for the processes involved in reverse breakdown and photon-assisted R–G. Additionally, it doesn't describe the "leveling off" of the I–V curve at high forward bias due to internal resistance. Introducing the ideality factor, n, accounts for recombination and generation of carriers.
Under reverse bias voltages the exponential in the diode equation is negligible, and the current is a constant (negative) reverse current value of −IS. The reverse breakdown region is not modeled by the Shockley diode equation.
For even rather small forward bias voltages the exponential is very large, because the thermal voltage is very small; the subtracted '1' in the diode equation is then negligible, and the forward diode current is often approximated as

I ≈ IS e^(VD / (n VT))
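A quick numerical check of how good this approximation is (thermal voltage as above; the 0.4 V bias point is an arbitrary example):

```python
import math

v_t = 0.025852          # thermal voltage at 300 K, volts
v_d = 0.4               # example forward bias, volts
exact = math.exp(v_d / v_t) - 1.0   # full diode-law exponential term
approx = math.exp(v_d / v_t)        # with the subtracted 1 dropped
print((approx - exact) / exact)     # relative error ~2e-7: negligible
```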
The use of the diode equation in circuit problems is illustrated in the article on diode modeling.
For circuit design, a small-signal model of the diode behavior often proves useful. A specific example of diode modeling is discussed in the article on small-signal circuits.
Following the end of forward conduction in a p–n type diode, a reverse current flows for a short time. The device does not attain its blocking capability until the mobile charge in the junction is depleted.
The effect can be significant when switching large currents very quickly. A certain amount of "reverse recovery time" tr (on the order of tens of nanoseconds to a few microseconds) may be required to remove the reverse recovery charge Qr from the diode. During this recovery time, the diode can actually conduct in the reverse direction. In certain real-world cases it can be important to consider the losses incurred by this non-ideal diode effect. However, when the slew rate of the current is not so severe (e.g., at line frequency) the effect can be safely ignored. For most applications, the effect is also negligible for Schottky diodes.
The reverse current ceases abruptly when the stored charge is depleted; this abrupt stop is exploited in step recovery diodes for generation of extremely short pulses.
Types of semiconductor diode
There are several types of p–n junction diodes. Some emphasize a different physical aspect of the diode, often by geometric scaling, doping level, or choice of electrodes; some are simply an ordinary diode used in a special circuit; and some are really different devices altogether, such as the Gunn diode, the laser diode, and the MOSFET:
Normal (p–n) diodes, which operate as described above, are usually made of doped silicon or, more rarely, germanium. Before the development of silicon power rectifier diodes, cuprous oxide and later selenium were used; their low efficiency gave them a much higher forward voltage drop (typically 1.4 to 1.7 V per "cell", with multiple cells stacked to increase the peak inverse voltage rating in high voltage rectifiers), and required a large heat sink (often an extension of the diode's metal substrate), much larger than a silicon diode of the same current ratings would require. The vast majority of all diodes are the p–n diodes found in CMOS integrated circuits, which include two diodes per pin and many other internal diodes.
- Diodes that conduct in the reverse direction when the reverse bias voltage exceeds the breakdown voltage. These are electrically very similar to Zener diodes, and are often mistakenly called Zener diodes, but break down by a different mechanism, the avalanche effect. This occurs when the reverse electric field across the p–n junction causes a wave of ionization, reminiscent of an avalanche, leading to a large current. Avalanche diodes are designed to break down at a well-defined reverse voltage without being destroyed. The difference between the avalanche diode (which has a reverse breakdown above about 6.2 V) and the Zener is that the channel length of the former exceeds the mean free path of the electrons, so there are collisions between them on the way out. The only practical difference is that the two types have temperature coefficients of opposite polarities.
- These are a type of point-contact diode. The cat's whisker diode consists of a thin or sharpened metal wire pressed against a semiconducting crystal, typically galena or a piece of coal. The wire forms the anode and the crystal forms the cathode. Cat's whisker diodes were also called crystal diodes and found application in crystal radio receivers. Cat's whisker diodes are generally obsolete, but may be available from a few manufacturers.
- These are actually a JFET with the gate shorted to the source, and function as a two-terminal current limiter, analogous to the Zener diode, which limits voltage. They allow a current through them to rise to a certain value, and then level off at a specific value. Also called CLDs, constant-current diodes, diode-connected transistors, or current-regulating diodes.
- These have a region of operation showing negative resistance caused by quantum tunneling, allowing amplification of signals and very simple bistable circuits. Due to the high carrier concentration, tunnel diodes are very fast, may be used at low (mK) temperatures, high magnetic fields, and in high radiation environments. Because of these properties, they are often used in spacecraft.
- These are similar to tunnel diodes in that they are made of materials such as GaAs or InP that exhibit a region of negative differential resistance. With appropriate biasing, dipole domains form and travel across the diode, allowing high frequency microwave oscillators to be built.
Light-emitting diodes (LEDs)
- In a diode formed from a direct band-gap semiconductor, such as gallium arsenide, carriers that cross the junction emit photons when they recombine with the majority carrier on the other side. Depending on the material, wavelengths (or colors) from the infrared to the near ultraviolet may be produced. The forward potential of these diodes depends on the wavelength of the emitted photons: 2.1 V corresponds to red, 4.0 V to violet. The first LEDs were red and yellow, and higher-frequency diodes have been developed over time. All LEDs produce incoherent, narrow-spectrum light; "white" LEDs are actually combinations of three LEDs of different colors, or a blue LED with a yellow phosphor coating. LEDs can also be used as low-efficiency photodiodes in signal applications. An LED may be paired with a photodiode or phototransistor in the same package, to form an opto-isolator.
- When an LED-like structure is contained in a resonant cavity formed by polishing the parallel end faces, a laser can be formed. Laser diodes are commonly used in optical storage devices and for high speed optical communication.
- This term is used both for conventional p–n diodes used to monitor temperature, because their forward voltage varies with temperature, and for Peltier heat pumps for thermoelectric heating and cooling. Peltier heat pumps may be made from semiconductor material; although they have no rectifying junction, they exploit the differing behaviour of charge carriers in N- and P-type semiconductor to move heat.
- All semiconductors are subject to optical charge carrier generation. This is typically an undesired effect, so most semiconductors are packaged in light-blocking material. Photodiodes are intended to sense light (acting as photodetectors), so they are packaged in materials that allow light to pass, and are usually PIN diodes (the kind of diode most sensitive to light). A photodiode can be used in solar cells, in photometry, or in optical communications. Multiple photodiodes may be packaged in a single device, either as a linear array or as a two-dimensional array. These arrays should not be confused with charge-coupled devices.
- A PIN diode has a central un-doped, or intrinsic, layer, forming a p-type/intrinsic/n-type structure. They are used as radio frequency switches and attenuators. They are also used as large volume ionizing radiation detectors and as photodetectors. PIN diodes are also used in power electronics, as their central layer can withstand high voltages. Furthermore, the PIN structure can be found in many power semiconductor devices, such as IGBTs, power MOSFETs, and thyristors.
- Schottky diodes are constructed from a metal to semiconductor contact. They have a lower forward voltage drop than p–n junction diodes. Their forward voltage drop at forward currents of about 1 mA is in the range 0.15 V to 0.45 V, which makes them useful in voltage clamping applications and prevention of transistor saturation. They can also be used as low loss rectifiers, although their reverse leakage current is in general higher than that of other diodes. Schottky diodes are majority carrier devices and so do not suffer from minority carrier storage problems that slow down many other diodes—so they have a faster reverse recovery than p–n junction diodes. They also tend to have much lower junction capacitance than p–n diodes, which provides for high switching speeds and their use in high-speed circuitry and RF devices such as switched-mode power supplies, mixers, and detectors.
Super barrier diodes
- Super barrier diodes are rectifier diodes that incorporate the low forward voltage drop of the Schottky diode with the surge-handling capability and low reverse leakage current of a normal p–n junction diode.
- As a dopant, gold (or platinum) acts as recombination centers, which helps a fast recombination of minority carriers. This allows the diode to operate at signal frequencies, at the expense of a higher forward voltage drop. Gold-doped diodes are faster than other p–n diodes (but not as fast as Schottky diodes). They also have less reverse-current leakage than Schottky diodes (but not as good as other p–n diodes). A typical example is the 1N914.
Snap-off or Step recovery diodes
- The term step recovery relates to the form of the reverse recovery characteristic of these devices. After a forward current has been passing in an SRD and the current is interrupted or reversed, the reverse conduction will cease very abruptly (as in a step waveform). SRDs can, therefore, provide very fast voltage transitions by the very sudden disappearance of the charge carriers.
Stabistors or Forward Reference Diodes
- The term stabistor refers to a special type of diode featuring extremely stable forward voltage characteristics. These devices are designed for low-voltage stabilization applications that require a voltage guaranteed over a wide current range and highly stable over temperature.
- These are avalanche diodes designed specifically to protect other semiconductor devices from high-voltage transients. Their p–n junctions have a much larger cross-sectional area than those of a normal diode, allowing them to conduct large currents to ground without sustaining damage.
Varicap or varactor diodes
- These are used as voltage-controlled capacitors. These are important in PLL (phase-locked loop) and FLL (frequency-locked loop) circuits, allowing tuning circuits, such as those in television receivers, to lock quickly. They also enabled tunable oscillators in early discrete tuning of radios, where a cheap and stable, but fixed-frequency, crystal oscillator provided the reference frequency for a voltage-controlled oscillator.
- Diodes that can be made to conduct backward. This effect, called Zener breakdown, occurs at a precisely defined voltage, allowing the diode to be used as a precision voltage reference. In practical voltage reference circuits, Zener and switching diodes are connected in series and opposite directions to balance the temperature coefficient to near-zero. Some devices labeled as high-voltage Zener diodes are actually avalanche diodes (see above). Two (equivalent) Zeners in series and in reverse order, in the same package, constitute a transient absorber (or Transorb, a registered trademark). The Zener diode is named for Dr. Clarence Melvin Zener of Carnegie Mellon University, inventor of the device.
Other uses for semiconductor diodes include sensing temperature, and computing analog logarithms (see logarithmic amplifier).
Numbering and coding schemes
The standardized 1N-series numbering system (EIA-370) was introduced in the US by EIA/JEDEC (Joint Electron Device Engineering Council) about 1960. Among the most popular in this series were: 1N34A/1N270 (germanium signal), 1N914/1N4148 (silicon signal), 1N4001–1N4007 (silicon 1 A power rectifier), and 1N54xx (silicon 3 A power rectifier).
The JIS semiconductor designation system has all semiconductor diode designations starting with "1S".
The European Pro Electron coding system for active components was introduced in 1966 and comprises two letters followed by the part code. The first letter represents the semiconductor material used for the component (A = Germanium and B = Silicon) and the second letter represents the general function of the part (for diodes: A = low-power/signal, B = Variable capacitance, X = Multiplier, Y = Rectifier and Z = Voltage reference), for example:
- AA-series germanium low-power/signal diodes (e.g.: AA119)
- BA-series silicon low-power/signal diodes (e.g.: BAT18 Silicon RF Switching Diode)
- BY-series silicon rectifier diodes (e.g.: BY127 1250V, 1A rectifier diode)
- BZ-series silicon Zener diodes (e.g.: BZY88C4V7 4.7V Zener diode)
Other common numbering / coding systems (generally manufacturer-driven) include:
- GD-series germanium diodes (e.g.: GD9) – this is a very old coding system
- OA-series germanium diodes (e.g.: OA47) – a coding sequence developed by Mullard, a UK company
As well as these common codes, many manufacturers or organisations have their own systems too – for example:
- HP diode 1901-0044 = JEDEC 1N4148
- UK military diode CV448 = Mullard type OA81 = GEC type GEX23
In optics, the equivalent device to the diode is the optical isolator, also known as an optical diode, which allows light to pass in only one direction. It uses a Faraday rotator as its main component.
The first use for the diode was the demodulation of amplitude modulated (AM) radio broadcasts. The history of this discovery is treated in depth in the radio article. In summary, an AM signal consists of alternating positive and negative peaks of a radio carrier wave, whose amplitude or envelope is proportional to the original audio signal. The diode (originally a crystal diode) rectifies the AM radio frequency signal, leaving only the positive peaks of the carrier wave. The audio is then extracted from the rectified carrier wave using a simple filter and fed into an audio amplifier or transducer, which generates sound waves.
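That rectify-then-filter chain can be sketched numerically. The snippet below builds a toy AM signal, applies an ideal diode (keeping only the positive half-cycles), and smooths the result with a first-order low-pass filter standing in for the RC filter; every frequency and the filter coefficient are illustrative choices, not values from the text.

```python
import math

# Toy AM signal: 100 kHz carrier, 1 kHz audio tone, sampled at 1 MHz.
fs, fc, fa = 1_000_000, 100_000, 1_000
t = [i / fs for i in range(2000)]
am = [(1.0 + 0.5 * math.sin(2 * math.pi * fa * ti))
      * math.sin(2 * math.pi * fc * ti) for ti in t]

# Ideal-diode rectification: only the positive peaks of the carrier survive.
rectified = [max(s, 0.0) for s in am]

# First-order low-pass filter (discrete RC stand-in): removes the carrier
# ripple, leaving a signal that follows the 1 kHz audio envelope
# (up to a constant scale factor).
alpha = 0.05
envelope, y = [], 0.0
for s in rectified:
    y += alpha * (s - y)
    envelope.append(y)
```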
Rectifiers are constructed from diodes, where they are used to convert alternating current (AC) electricity into direct current (DC). Automotive alternators are a common example, where the diode, which rectifies the AC into DC, provides better performance than the commutator of earlier dynamos. Similarly, diodes are also used in Cockcroft–Walton voltage multipliers to convert AC into higher DC voltages.
Diodes are frequently used to conduct damaging high voltages away from sensitive electronic devices. They are usually reverse-biased (non-conducting) under normal circumstances. When the voltage rises above the normal range, the diodes become forward-biased (conducting). For example, diodes are used in (stepper motor and H-bridge) motor controller and relay circuits to de-energize coils rapidly without the damaging voltage spikes that would otherwise occur. (Any diode used in such an application is called a flyback diode). Many integrated circuits also incorporate diodes on the connection pins to prevent external voltages from damaging their sensitive transistors. Specialized diodes are used to protect from over-voltages at higher power (see Diode types above).
Ionizing radiation detectors
In addition to light, mentioned above, semiconductor diodes are sensitive to more energetic radiation. In electronics, cosmic rays and other sources of ionizing radiation cause noise pulses and single and multiple bit errors. This effect is sometimes exploited by particle detectors to detect radiation. A single particle of radiation, with thousands or millions of electron volts of energy, generates many charge carrier pairs, as its energy is deposited in the semiconductor material. If the depletion layer is large enough to catch the whole shower or to stop a heavy particle, a fairly accurate measurement of the particle's energy can be made, simply by measuring the charge conducted and without the complexity of a magnetic spectrometer, etc. These semiconductor radiation detectors need efficient and uniform charge collection and low leakage current. They are often cooled by liquid nitrogen. For longer-range (about a centimetre) particles, they need a very large depletion depth and large area. For short-range particles, they need any contact or un-depleted semiconductor on at least one surface to be very thin. The back-bias voltages are near breakdown (around a thousand volts per centimetre). Germanium and silicon are common materials. Some of these detectors sense position as well as energy. They have a finite life, especially when detecting heavy particles, because of radiation damage. Silicon and germanium are quite different in their ability to convert gamma rays to electron showers.
Semiconductor detectors for high-energy particles are used in large numbers. Because of energy loss fluctuations, accurate measurement of the energy deposited is of less use.
A diode can be used as a temperature measuring device, since the forward voltage drop across the diode depends on temperature, as in a silicon bandgap temperature sensor. From the Shockley ideal diode equation given above, it might appear that the voltage has a positive temperature coefficient (at a constant current), but usually the variation of the reverse saturation current term is more significant than the variation in the thermal voltage term. Most diodes therefore have a negative temperature coefficient, typically −2 mV/°C for silicon diodes at room temperature. This is approximately linear for temperatures above about 20 kelvins.
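A minimal sketch of using that coefficient in software, assuming a linear −2 mV/°C slope and a made-up calibration point; for a real diode, both calibration values would be measured at a fixed bias current.

```python
def diode_temperature(v_f, v_f_ref=0.600, t_ref=25.0, tempco=-0.002):
    """Estimate temperature (degC) from a silicon diode's forward voltage.

    v_f_ref, t_ref: assumed calibration point (0.600 V at 25 degC)
    tempco:         assumed linear coefficient, -2 mV per degC
    """
    return t_ref + (v_f - v_f_ref) / tempco

print(diode_temperature(0.560))  # 45.0: a 40 mV drop in V_F reads as +20 degC
```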
Diodes will prevent currents in unintended directions. To supply power to an electrical circuit during a power failure, the circuit can draw current from a battery. An uninterruptible power supply may use diodes in this way to ensure that current is only drawn from the battery when necessary. Likewise, small boats typically have two circuits each with their own battery/batteries: one used for engine starting; one used for domestics. Normally, both are charged from a single alternator, and a heavy-duty split-charge diode is used to prevent the higher-charge battery (typically the engine battery) from discharging through the lower-charge battery when the alternator is not running.
Diodes are also used in electronic musical keyboards. To reduce the amount of wiring needed in electronic musical keyboards, these instruments often use keyboard matrix circuits. The keyboard controller scans the rows and columns to determine which note the player has pressed. The problem with matrix circuits is that, when several notes are pressed at once, the current can flow backwards through the circuit and trigger "phantom keys" that cause "ghost" notes to play. To avoid triggering unwanted notes, most keyboard matrix circuits have diodes soldered with the switch under each key of the musical keyboard. The same principle is also used for the switch matrix in solid-state pinball machines.
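The ghost-key effect can be illustrated with a small logical simulation (no electrical modeling): without per-key diodes, whenever three corners of a row/column rectangle are held down, current sneaking backwards through the keys makes the fourth corner read as pressed.

```python
def scan_matrix(pressed, diodes=True):
    """Simulate reading a keyboard matrix.

    pressed: set of (row, col) keys physically held down.
    With a diode per key the scan reports exactly those keys; without
    diodes, the fourth corner of any rectangle of three held keys is
    reported as a phantom ("ghost") key.
    """
    seen = set(pressed)
    if not diodes:
        changed = True
        while changed:  # newly found ghosts can enable further ghosts
            changed = False
            for (r1, c1) in list(seen):
                for (r2, c2) in list(seen):
                    if (r1 != r2 and c1 != c2 and (r1, c2) in seen
                            and (r2, c1) not in seen):
                        seen.add((r2, c1))
                        changed = True
    return sorted(seen)

held = {(0, 0), (0, 1), (1, 0)}
print(scan_matrix(held, diodes=False))  # includes the phantom key (1, 1)
print(scan_matrix(held, diodes=True))   # reports only the keys actually held
```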
Two-terminal nonlinear devices
Many other two-terminal nonlinear devices exist, for example a neon lamp has two terminals in a glass envelope and has interesting and useful nonlinear properties. Lamps including arc-discharge lamps, incandescent lamps, fluorescent lamps and mercury vapor lamps have two terminals and display nonlinear current–voltage characteristics.
There are wide varieties of economic inequality, most notably measured using the distribution of income (the amount of money people are paid) and the distribution of wealth (the amount of wealth people own). Besides economic inequality between countries or states, there are important types of economic inequality between different groups of people.
Important types of economic measurements focus on wealth, income, and consumption. There are many methods for measuring economic inequality, with the Gini coefficient being a widely used one. Another type of measure is the Inequality-adjusted Human Development Index, which is a statistic composite index that takes inequality into account. Important concepts of equality include equity, equality of outcome, and equality of opportunity.
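As an illustration of how the Gini coefficient condenses an entire distribution into one number, here is a minimal Python sketch using the standard sorted-values formula; the sample data are invented.

```python
def gini(values):
    """Gini coefficient of a list of nonnegative incomes or wealths.

    0 means perfect equality; values approaching 1 mean one unit holds
    nearly everything. Uses the standard formula on sorted values.
    """
    xs = sorted(values)
    n, total = len(xs), sum(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))  # rank-weighted sum
    return (2.0 * cum) / (n * total) - (n + 1.0) / n

print(gini([1, 1, 1, 1]))    # 0.0: perfect equality
print(gini([0, 0, 0, 100]))  # 0.75: one of four units holds everything
```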
Research suggests that greater inequality hinders economic growth, with land and human capital inequality reducing growth more than inequality of income. Whereas globalization has reduced global inequality (between nations), it has increased inequality within nations. Research has generally linked economic inequality to political instability, including democratic breakdown and civil conflict.
In 1820, the ratio between the income of the top and bottom 20 percent of the world's population was three to one. By 1991, it was eighty-six to one. A 2011 study titled "Divided we Stand: Why Inequality Keeps Rising" by the Organisation for Economic Co-operation and Development (OECD) sought to explain the causes for this rising inequality by investigating economic inequality in OECD countries; it concluded that the following factors played a role:
The study made the following conclusions about the level of economic inequality:
A 2011 OECD study investigated economic inequality in Argentina, Brazil, China, India, Indonesia, Russia and South Africa. It concluded that key sources of inequality in these countries include "a large, persistent informal sector, widespread regional divides (e.g. urban-rural), gaps in access to education, and barriers to employment and career progression for women."
A study by the World Institute for Development Economics Research at United Nations University reports that the richest 1% of adults alone owned 40% of global assets in the year 2000. The three richest people in the world possess more financial assets than the lowest 48 nations combined. The combined wealth of the "10 million dollar millionaires" grew to nearly $41 trillion in 2008. Oxfam's 2021 report on global inequality said that the COVID-19 pandemic has increased economic inequality substantially; the wealthiest people across the globe were impacted the least by the pandemic and their fortunes recovered quickest, with billionaires seeing their wealth increase by $3.9 trillion, while at the same time the number of people living on less than $5.50 a day likely increased by 500 million. The report also emphasized that the wealthiest 1% are by far the biggest polluters and main drivers of climate change, and said that government policy should focus on fighting both inequality and climate change simultaneously.
According to PolitiFact, the top 400 richest Americans "have more wealth than half of all Americans combined." According to The New York Times on July 22, 2014, the "richest 1 percent in the United States now own more wealth than the bottom 90 percent". Inherited wealth may help explain why many Americans who have become rich may have had a "substantial head start". In September 2012, according to the Institute for Policy Studies (IPS), "over 60 percent" of the Forbes richest 400 Americans "grew up in substantial privilege". A 2017 report by the IPS said that three individuals, Jeff Bezos, Bill Gates and Warren Buffett, own as much wealth as the bottom half of the population, or 160 million people, and that the growing disparity between the wealthy and the poor has created a "moral crisis", noting that "we have not witnessed such extreme levels of concentrated wealth and power since the first gilded age a century ago." In 2016, the world's billionaires increased their combined global wealth to a record $6 trillion. In 2017, they increased their collective wealth to $8.9 trillion. In 2018, U.S. income inequality reached the highest level ever recorded by the Census Bureau.
The existing data and estimates suggest a large increase in the international (and more generally inter-macroregional) component of inequality between 1820 and 1960. It may have decreased slightly since then, at the expense of increasing inequality within countries. The United Nations Development Programme in 2014 asserted that greater investments in social security, jobs and laws that protect vulnerable populations are necessary to prevent widening income inequality.
There is a significant difference in the measured wealth distribution and the public's understanding of wealth distribution. Michael Norton of the Harvard Business School and Dan Ariely of the Department of Psychology at Duke University found this to be true in their research conducted in 2011. The actual wealth going to the top quintile in 2011 was around 84%, whereas the average amount of wealth that the general public estimated to go to the top quintile was around 58%.
According to a 2020 study, global earnings inequality has decreased substantially since 1970. During the 2000s and 2010s, the share of earnings by the world's poorest half doubled. Two researchers claim that global income inequality is decreasing due to strong economic growth in developing countries. According to a January 2020 report by the United Nations Department of Economic and Social Affairs, economic inequality between states had declined, but intra-state inequality increased for 70% of the world population over the period 1990–2015. In 2015, the OECD reported that income inequality is higher than it has ever been within OECD member nations and is at increased levels in many emerging economies. According to a June 2015 report by the International Monetary Fund:
Widening income inequality is the defining challenge of our time. In advanced economies, the gap between the rich and poor is at its highest level in decades. Inequality trends have been more mixed in emerging markets and developing countries (EMDCs), with some countries experiencing declining inequality, but pervasive inequities in access to education, health care, and finance remain.
In October 2017, the IMF warned that inequality within nations, in spite of global inequality falling in recent decades, has risen so sharply that it threatens economic growth and could result in further political polarization. The Fund's Fiscal Monitor report said that "progressive taxation and transfers are key components of efficient fiscal redistribution." In October 2018 Oxfam published a Reducing Inequality Index which measured social spending, tax and workers' rights to show which countries were best at closing the gap between the rich and the poor.
A 2018 report by Crédit Suisse gives comparable figures on individual wealth distribution in different countries.
A Gini index value above 50 is considered high; countries including Brazil, Colombia, South Africa, Botswana, and Honduras can be found in this category. A Gini index value of 30 or above is considered medium; countries including Vietnam, Mexico, Poland, the United States, Argentina, Russia and Uruguay can be found in this category. A Gini index value lower than 30 is considered low; countries including Austria, Germany, Denmark, Norway, Slovenia, Sweden, and Ukraine can be found in this category.
There are various reasons for economic inequality within societies, including both global market functions (such as trade, development, and regulation) as well as social factors (including gender, race, and education). Recent growth in overall income inequality, at least within the OECD countries, has been driven mostly by increasing inequality in wages and salaries.
Economist Thomas Piketty argues that widening economic disparity is an inevitable phenomenon of free market capitalism when the rate of return of capital (r) is greater than the rate of growth of the economy (g).
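Piketty's mechanism can be shown with a stylized compounding sketch: if returns to capital are fully reinvested, the capital stock compounds at r while national income grows at g, so the capital-to-income ratio rises whenever r > g. All parameter values below are illustrative round numbers, not estimates from Piketty's data.

```python
def capital_to_income_ratio(r=0.05, g=0.02, years=50, beta0=4.0):
    """Toy model of the r > g dynamic: the capital-to-income ratio is
    multiplied by (1 + r) / (1 + g) each year that returns are reinvested."""
    beta = beta0
    for _ in range(years):
        beta *= (1 + r) / (1 + g)
    return beta

print(capital_to_income_ratio())  # ~17: about 4x the starting ratio of 4
```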
A major cause of economic inequality within modern market economies is the determination of wages by the market. Where competition is imperfect, information is unevenly distributed, and opportunities to acquire education and skills are unequal, market failure results. Since many such imperfect conditions exist in virtually every market, there is in fact little presumption that markets are in general efficient. This means that there is an enormous potential role for government to correct such market failures.
Another cause is the rate at which income is taxed coupled with the progressivity of the tax system. A progressive tax is a tax by which the tax rate increases as the taxable base amount increases. In a progressive tax system, the level of the top tax rate will often have a direct impact on the level of inequality within a society, either increasing it or decreasing it, provided that income does not change as a result of the change in tax regime. Additionally, steeper tax progressivity applied to social spending can result in a more equal distribution of income across the board. Tax credits such as the Earned Income Tax Credit in the US can also decrease income inequality. The difference between the Gini index for an income distribution before taxation and the Gini index after taxation is an indicator for the effects of such taxation.
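The before-and-after-tax comparison can be made concrete by reusing the gini() helper from the sketch above. The incomes and tax schedules below are invented placeholders; the point is only that a flat tax leaves the Gini index unchanged (every income is rescaled equally), while progressive marginal rates lower it.

```python
incomes = [15_000, 30_000, 60_000, 120_000, 500_000]  # invented sample

def after_tax(income, brackets):
    """Apply marginal rates; brackets is a list of (upper threshold, rate)."""
    tax, prev = 0.0, 0.0
    for threshold, rate in brackets:
        tax += max(0.0, min(income, threshold) - prev) * rate
        prev = threshold
    return income - tax

progressive = [(20_000, 0.00), (80_000, 0.20), (float("inf"), 0.40)]
flat = [(float("inf"), 0.25)]

print(gini(incomes))                                       # pre-tax: ~0.58
print(gini([after_tax(x, flat) for x in incomes]))         # flat tax: unchanged
print(gini([after_tax(x, progressive) for x in incomes]))  # progressive: ~0.53
```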
An important factor in the creation of inequality is variation in individuals' access to education. Education, especially in an area where there is a high demand for workers, creates high wages for those with this education. However, increases in education first raise and then reduce growth, as well as income inequality. As a result, those who are unable to afford an education, or choose not to pursue optional education, generally receive much lower wages. The justification for this is that a lack of education leads directly to lower incomes, and thus lower aggregate saving and investment. Conversely, quality education raises incomes and promotes growth because it helps to unleash the productive potential of the poor.
John Schmitt and Ben Zipperer (2006) of the CEPR point to economic liberalism and the reduction of business regulation along with the decline of union membership as one of the causes of economic inequality. In an analysis of the effects of intensive Anglo-American liberal policies in comparison to continental European liberalism, where unions have remained strong, they concluded "The U.S. economic and social model is associated with substantial levels of social exclusion, including high levels of income inequality, high relative and absolute poverty rates, poor and unequal educational outcomes, poor health outcomes, and high rates of crime and incarceration. At the same time, the available evidence provides little support for the view that U.S.-style labor market flexibility dramatically improves labor-market outcomes. Despite popular prejudices to the contrary, the U.S. economy consistently affords a lower level of economic mobility than all the continental European countries for which data is available."
More recently, the International Monetary Fund has published studies which found that the decline of unionization in many advanced economies and the establishment of neoliberal economics have fueled rising income inequality.
The growth in importance of information technology has been credited with increasing income inequality. Technology has been called "the main driver of the recent increases in inequality" by Erik Brynjolfsson, of MIT. In arguing against this explanation, Jonathan Rothwell notes that if technological advancement is measured by high rates of invention, there is a negative correlation between it and inequality. Countries with high invention rates — "as measured by patent applications filed under the Patent Cooperation Treaty" — exhibit lower inequality than those with lower rates. In one country, the United States, "salaries of engineers and software developers rarely reach" above $390,000/year (the lower limit for the top 1% earners).
Trade liberalization may shift economic inequality from a global to a domestic scale. When rich countries trade with poor countries, the low-skilled workers in the rich countries may see reduced wages as a result of the competition, while low-skilled workers in the poor countries may see increased wages. Trade economist Paul Krugman estimates that trade liberalisation has had a measurable effect on the rising inequality in the United States. He attributes this trend to increased trade with poor countries and the fragmentation of the means of production, resulting in low skilled jobs becoming more tradeable.
Anthropologist Jason Hickel contends that globalization and "structural adjustment" set off the "race to the bottom", a significant driver of surging global inequality. Another driver Hickel mentions is the debt system which advanced the need for structural adjustment in the first place.
Main article: Gender inequality
In many countries, there is a gender pay gap in favor of males in the labor market. Several factors other than discrimination contribute to this gap. On average, women are more likely than men to consider factors other than pay when looking for work, and may be less willing to travel or relocate. Thomas Sowell, in his book Knowledge and Decisions, claims that this difference is due to women not taking jobs due to marriage or pregnancy. A U.S. Census report stated that in the US, once other factors are accounted for, there is still a difference in earnings between women and men.
Main article: Social inequality
There is also a globally recognized disparity in the wealth, income, and economic welfare of people of different races. In many nations, data exists to suggest that members of certain racial demographics experience lower wages, fewer opportunities for career and educational advancement, and intergenerational wealth gaps. Studies have uncovered the emergence of what is called "ethnic capital", by which people belonging to a race that has experienced discrimination are born into a disadvantaged family from the beginning and therefore have fewer resources and opportunities at their disposal. The universal lack of education, technical and cognitive skills, and inheritable wealth within a particular race is often passed down between generations, compounding in effect to make escaping these racialized cycles of poverty increasingly difficult. Additionally, ethnic groups that experience significant disparities are often also minorities, at least in representation though often in number as well, in the nations where they experience the harshest disadvantage. As a result, they are often segregated either by government policy or social stratification, leading to ethnic communities that experience widespread gaps in wealth and aid.
As a general rule, races which have been historically and systematically colonized (typically indigenous ethnicities) continue to experience lower levels of financial stability in the present day. The global South is considered to be particularly victimized by this phenomenon, though the exact socioeconomic manifestations change across different regions.
Even in economically developed societies with high levels of modernization such as may be found in Western Europe, North America, and Australia, minority ethnic groups and immigrant populations in particular experience financial discrimination. While the progression of civil rights movements and justice reform has improved access to education and other economic opportunities in politically advanced nations, racial disparities in income and wealth still prove significant. In the United States for example, which serves as a good basis for understanding racial discrimination in the West due to the amount of research attention it receives, a survey of African-American populations shows that they are more likely to drop out of high school and college, are typically employed for fewer hours at lower wages, have lower than average intergenerational wealth, and are more likely to use welfare as young adults than their white counterparts. Mexican-Americans, while suffering less debilitating socioeconomic factors than black Americans, experience deficiencies in the same areas when compared to whites and have not assimilated financially to the level of stability experienced by white Americans as a whole. These experiences are the effects of the measured disparity due to race in countries like the US, where studies show that in comparison to whites, blacks suffer from drastically lower levels of upward mobility, higher levels of downward mobility, and poverty that is more easily transmitted to offspring as a result of the disadvantage stemming from the era of slavery and post-slavery racism that has been passed through racial generations to the present. These are lasting financial inequalities that apply in varying magnitudes to most non-white populations in nations such as the US, the UK, France, Spain, Australia, etc.
In the countries of the Caribbean, Central America, and South America, many ethnicities continue to deal with the effects of European colonization, and in general nonwhites tend to be noticeably poorer than whites in this region. In many countries with significant populations of indigenous races and those of Afro-descent (such as Mexico, Colombia, Chile, etc.) income levels can be roughly half as high as those experienced by white demographics, and this inequity is accompanied by systematically unequal access to education, career opportunities, and poverty relief. This region of the world, apart from urbanizing areas like Brazil and Costa Rica, continues to be understudied, and often the racial disparity is denied by Latin Americans who consider themselves to be living in post-racial and post-colonial societies far removed from intense social and economic stratification, despite the evidence to the contrary.
African countries, too, continue to deal with the effects of the Trans-Atlantic Slave Trade, which set back economic development as a whole for blacks of African citizenship more than any other region. The degree to which colonizers stratified their holdings on the continent on the basis of race has had a direct correlation with the magnitude of disparity experienced by nonwhites in the nations that eventually rose from their colonial status. Former French colonies, for example, see much higher rates of income inequality between whites and nonwhites as a result of the rigid hierarchy imposed by the French who lived in Africa at the time. Another prime example is found in South Africa, which, still reeling from the socioeconomic impacts of Apartheid, experiences some of the highest racial income and wealth inequality in all of Africa. In these and other countries like Nigeria, Zimbabwe, and Sierra Leone, movements of civil reform have initially led to improved access to financial advancement opportunities, but data actually shows that for nonwhites this progress is either stalling or eroding in the newest generation of blacks that seek education and improved transgenerational wealth. The economic status of one's parents continues to define and predict the financial futures of African and minority ethnic groups.
Asian regions and countries such as China, the Middle East, and Central Asia have been vastly understudied in terms of racial disparity, but even here the effects of Western colonization provide similar results to those found in other parts of the globe. Additionally, cultural and historical practices such as the caste system in India leave their marks as well. While the disparity is greatly improving in the case of India, there still exists social stratification between peoples of lighter and darker skin tones that cumulatively result in income and wealth inequality, manifesting in many of the same poverty traps seen elsewhere.
Main article: Kuznets curve
Economist Simon Kuznets argued that levels of economic inequality are in large part the result of stages of development. According to Kuznets, countries with low levels of development have relatively equal distributions of wealth. As a country develops, it acquires more capital, which leads to the owners of this capital having more wealth and income and introducing inequality. Eventually, through various possible redistribution mechanisms such as social welfare programs, more developed countries move back to lower levels of inequality.
Andranik Tangian argues that the growing productivity due to advanced technologies results in increasing purchasing power of wages for most commodities, which enables employers to underpay workers in "labor equivalents" while maintaining an impression of fair pay. This illusion is dismantled by the decreasing purchasing power of wages for commodities with a significant share of hand labor. The difference between the appropriate and the actual pay goes to enterprise owners and top earners, increasing inequality.
Wealth concentration is the process by which, under certain conditions, newly created wealth concentrates in the possession of already-wealthy individuals or entities. Accordingly, those who already hold wealth have the means to invest in new sources of creating wealth or to otherwise leverage the accumulation of wealth, and thus they are the beneficiaries of the new wealth. Over time, wealth concentration can significantly contribute to the persistence of inequality within society. Thomas Piketty in his book Capital in the Twenty-First Century argues that the fundamental force for divergence is the usually greater return of capital (r) than economic growth (g), and that larger fortunes generate higher returns.
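Piketty's divergence mechanism can be illustrated with a toy simulation: if reinvested capital compounds at r while incomes grow at g < r, the wealth-to-income ratio grows without bound. The rates and starting values below are illustrative, not empirical estimates.

```python
# Illustrative simulation of Piketty's r > g divergence (made-up numbers).
r, g = 0.05, 0.02          # return on capital vs. economy-wide growth
wealth, income = 100.0, 100.0

for year in (0, 25, 50, 75, 100):
    print(f"year {year:3d}: wealth/income ratio = {wealth / income:.2f}")
    for _ in range(25):
        wealth *= 1 + r    # capital compounds at r (all returns reinvested)
        income *= 1 + g    # the economy grows at g
# The ratio climbs from 1.0 to roughly 18 after a century.
```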
According to a 2020 study by the RAND Corporation, the top 1% of U.S. income earners have taken $50 trillion from the bottom 90% between 1975 and 2018.
Main article: Rent-seeking
Economist Joseph Stiglitz argues that rather than explaining concentrations of wealth and income, market forces should serve as a brake on such concentration, which may better be explained by the non-market force known as "rent-seeking". While the market will bid up compensation for rare and desired skills to reward wealth creation, greater productivity, etc., it will also prevent successful entrepreneurs from earning excess profits by fostering competition to cut prices, profits and large compensation. A better explanation of growing inequality, according to Stiglitz, is the use of political power generated by wealth by certain groups to shape government policies financially beneficial to them. This process, known to economists as rent-seeking, brings income not from creation of wealth but from "grabbing a larger share of the wealth that would otherwise have been produced without their effort".
Jamie Galbraith argues that countries with larger financial sectors have greater inequality, and that the link is not an accident.
A 2019 study published in PNAS found that global warming plays a role in increasing economic inequality between countries, boosting economic growth in developed countries while hampering such growth in developing nations of the Global South. The study says that 25% of the gap between the developed world and the developing world can be attributed to global warming.
A 2020 report by Oxfam and the Stockholm Environment Institute says that the wealthiest 10% of the global population were responsible for more than half of global carbon dioxide emissions from 1990 to 2015, a period in which emissions grew by 60%. According to a 2020 report by the UNEP, overconsumption by the rich is a significant driver of the climate crisis, and the wealthiest 1% of the world's population are responsible for more than double the greenhouse gas emissions of the poorest 50% combined. Inger Andersen, in the foreword to the report, said "this elite will need to reduce their footprint by a factor of 30 to stay in line with the Paris Agreement targets."
Countries with a left-leaning legislature generally have lower levels of inequality. Many factors constrain economic inequality – they may be divided into two classes: government sponsored, and market driven. The relative merits and effectiveness of each approach is a subject of debate.
Typical government initiatives to reduce economic inequality include:
Market forces outside of government intervention that can reduce economic inequality include:
Research shows that since 1300, the only periods with significant declines in wealth inequality in Europe were the Black Death and the two World Wars. Historian Walter Scheidel posits that, since the stone age, only extreme violence, catastrophes and upheaval in the form of total war, Communist revolution, pestilence and state collapse have significantly reduced inequality. He has stated that "only all-out thermonuclear war might fundamentally reset the existing distribution of resources" and that "peaceful policy reform may well prove unequal to the growing challenges ahead."
Main article: Effects of economic inequality
Much research has been done on the effects of economic inequality on different aspects of society.
According to Christina Starmans et al. (Nature Human Behaviour, 2017), the research literature contains no evidence that people have an aversion to inequality as such. In all studies analyzed, the subjects preferred fair distributions to equal distributions, in both laboratory and real-world situations. In public, researchers may loosely speak of equality instead of fairness when referring to studies where fairness happens to coincide with equality, but in many studies fairness is carefully separated from equality, and the results are unequivocal. Even very young children seem to prefer fairness over equality.
When people were asked what the wealth of each quintile should be in their ideal society, they assigned the richest quintile fifty times the wealth of the poorest quintile. The preference for inequality increases in adolescence, as does the tendency to reward fortune, effort and ability in the distribution.
A preference for unequal distribution may have developed in the human race because it allows for better cooperation: it lets a person work with a more productive partner so that both parties benefit. Inequality is also said to help solve the problems of free-riders, cheaters and ill-behaving people, although this is heavily debated. Research demonstrates that people usually underestimate the actual level of inequality, which is also much higher than their desired level of inequality.
In some societies, such as the USSR, equal distribution led to protests from wealthier landowners. In the present-day U.S., many feel that the distribution is unfair in being too unequal. In both cases, the researchers conclude, the cause is unfairness, not inequality.
Socialists attribute the vast disparities in wealth to the private ownership of the means of production by a class of owners, creating a situation where a small portion of the population lives off unearned property income by virtue of ownership titles in capital equipment, financial assets and corporate stock. By contrast, the vast majority of the population is dependent on income in the form of a wage or salary. In order to rectify this situation, socialists argue that the means of production should be socially owned so that income differentials would be reflective of individual contributions to the social product.
Marxist socialists ultimately predict the emergence of a communist society based on the common ownership of the means of production, where each individual citizen would have free access to the articles of consumption (From each according to his ability, to each according to his need). According to Marxist philosophy, equality in the sense of free access is essential for freeing individuals from dependent relationships, thereby allowing them to transcend alienation.
Meritocracy favors an eventual society where an individual's success is a direct function of his merit, or contribution. Economic inequality would be a natural consequence of the wide range in individual skill, talent and effort in the human population. David Landes stated that the progression of Western economic development that led to the Industrial Revolution was facilitated by men advancing through their own merit rather than because of family or political connections.
Most modern social liberals, including centrist or left-of-center political groups, believe that the capitalist economic system should be fundamentally preserved, but the status quo regarding the income gap must be reformed. Social liberals favor a capitalist system with active Keynesian macroeconomic policies and progressive taxation (to even out differences in income inequality). Research indicates that people who hold liberal beliefs tend to see greater income inequality as morally wrong.
However, contemporary classical liberals and libertarians generally do not take a stance on wealth inequality, but believe in equality under the law regardless of whether it leads to unequal wealth distribution. In 1966, Ludwig von Mises, a prominent figure in the Austrian School of economic thought, explained:
The liberal champions of equality under the law were fully aware of the fact that men are born unequal and that it is precisely their inequality that generates social cooperation and civilization. Equality under the law was in their opinion not designed to correct the inexorable facts of the universe and to make natural inequality disappear. It was, on the contrary, the device to secure for the whole of mankind the maximum of benefits it can derive from it. Henceforth no man-made institutions should prevent a man from attaining that station in which he can best serve his fellow citizens.
Robert Nozick argued that government redistributes wealth by force (usually in the form of taxation), and that the ideal moral society would be one where all individuals are free from force. However, Nozick recognized that some modern economic inequalities were the result of forceful taking of property, and a certain amount of redistribution would be justified to compensate for this force but not because of the inequalities themselves. John Rawls argued in A Theory of Justice that inequalities in the distribution of wealth are only justified when they improve society as a whole, including the poorest members. Rawls does not discuss the full implications of his theory of justice. Some see Rawls's argument as a justification for capitalism since even the poorest members of society theoretically benefit from increased innovations under capitalism; others believe only a strong welfare state can satisfy Rawls's theory of justice.
Classical liberal Milton Friedman believed that if government action is taken in pursuit of economic equality then political freedom would suffer. In a famous quote, he said:
A society that puts equality before freedom will get neither. A society that puts freedom before equality will get a high degree of both.
Economist Tyler Cowen has argued that though income inequality has increased within nations, globally it has fallen over the 20 years leading up to 2014. He argues that though income inequality may make individual nations worse off, overall, the world has improved as global inequality has been reduced.
Patrick Diamond and Anthony Giddens (professors of Economics and Sociology, respectively) hold that 'pure meritocracy is incoherent because, without redistribution, one generation's successful individuals would become the next generation's embedded caste, hoarding the wealth they had accumulated'.
They also state that social justice requires redistribution of high incomes and large concentrations of wealth in a way that spreads it more widely, in order to "recognise the contribution made by all sections of the community to building the nation's wealth." (Patrick Diamond and Anthony Giddens, June 27, 2005, New Statesman)
Pope Francis stated in his Evangelii gaudium, that "as long as the problems of the poor are not radically resolved by rejecting the absolute autonomy of markets and financial speculation and by attacking the structural causes of inequality, no solution will be found for the world's problems or, for that matter, to any problems." He later declared that "inequality is the root of social evil."
When income inequality is low, aggregate demand will be relatively high, because more people who want ordinary consumer goods and services will be able to afford them, while the labor force will not be as relatively monopolized by the wealthy.
In most western democracies, the desire to eliminate or reduce economic inequality is generally associated with the political left. One practical argument in favor of reduction is the idea that economic inequality reduces social cohesion and increases social unrest, thereby weakening the society. There is evidence that this is true (see inequity aversion) and it is intuitive, at least for small face-to-face groups of people. Alberto Alesina, Rafael Di Tella, and Robert MacCulloch find that inequality negatively affects happiness in Europe but not in the United States.
It has also been argued that economic inequality invariably translates to political inequality, which further aggravates the problem. Even in cases where an increase in economic inequality makes nobody economically poorer, an increased inequality of resources is disadvantageous, as increased economic inequality can lead to a power shift due to an increased inequality in the ability to participate in democratic processes.
Further information: Capability approach
The capabilities approach – sometimes called the human development approach – looks at income inequality and poverty as a form of "capability deprivation". Unlike neoliberalism, which "defines well-being as utility maximization", economic growth and income are considered a means to an end rather than the end itself. Its goal is to "wid[en] people's choices and the level of their achieved well-being" through increasing functionings (the things a person values doing), capabilities (the freedom to enjoy functionings) and agency (the ability to pursue valued goals).
When a person's capabilities are lowered, they are in some way deprived of earning as much income as they would otherwise. An old, ill man cannot earn as much as a healthy young man; gender roles and customs may prevent a woman from receiving an education or working outside the home. There may be an epidemic that causes widespread panic, or there could be rampant violence in the area that prevents people from going to work for fear of their lives. As a result, income inequality increases, and it becomes more difficult to reduce the gap without additional aid. To prevent such inequality, this approach believes it is important to have political freedom, economic facilities, social opportunities, transparency guarantees, and protective security to ensure that people aren't denied their functionings, capabilities, and agency and can thus work towards a better relevant income.
A 2011 OECD study makes a number of suggestions to its member countries, including:
Progressive taxation reduces absolute income inequality when the higher rates on higher-income individuals are paid and not evaded, and transfer payments and social safety nets result in progressive government spending. Wage ratio legislation has also been proposed as a means of reducing income inequality. The OECD asserts that public spending is vital in reducing the ever-expanding wealth gap.
The economists Emmanuel Saez and Thomas Piketty recommend much higher top marginal tax rates on the wealthy, up to 50 percent, 70 percent or even 90 percent. Ralph Nader, Jeffrey Sachs, the United Front Against Austerity, among others, call for a financial transactions tax (also known as the Robin Hood tax) to bolster the social safety net and the public sector.
The Economist wrote in December 2013: "A minimum wage, providing it is not set too high, could thus boost pay with no ill effects on jobs....America's federal minimum wage, at 38% of median income, is one of the rich world's lowest. Some studies find no harm to employment from federal or state minimum wages, others see a small one, but none finds any serious damage."
General limitations on and taxation of rent-seeking are popular across the political spectrum.
Public policy responses addressing causes and effects of income inequality in the US include: progressive tax incidence adjustments, strengthening social safety net provisions such as Aid to Families with Dependent Children, welfare, the food stamp program, Social Security, Medicare, and Medicaid, organizing community interest groups, increasing and reforming higher education subsidies, increasing infrastructure spending, and placing limits on and taxing rent-seeking.
A 2017 study in the Journal of Political Economy by Daron Acemoglu, James Robinson and Thierry Verdier argues that American "cutthroat" capitalism and inequality gives rise to technology and innovation that more "cuddly" forms of capitalism cannot. As a result, "the diversity of institutions we observe among relatively advanced countries, ranging from greater inequality and risk-taking in the United States to the more egalitarian societies supported by a strong safety net in Scandinavia, rather than reflecting differences in fundamentals between the citizens of these societies, may emerge as a mutually self-reinforcing world equilibrium. If so, in this equilibrium, 'we cannot all be like the Scandinavians,' because Scandinavian capitalism depends in part on the knowledge spillovers created by the more cutthroat American capitalism." A 2012 working paper by the same authors, making similar arguments, was challenged by Lane Kenworthy, who posited that, among other things, the Nordic countries are consistently ranked as some of the world's most innovative countries by the World Economic Forum's Global Competitiveness Index, with Sweden ranking as the most innovative nation, followed by Finland, for 2012–2013; the U.S. ranked sixth.
There are, however, global initiatives like the United Nations Sustainable Development Goal 10, which aims to garner international efforts to reduce economic inequality considerably by 2030.
Summary - This paper develops a meta-analysis of the empirical literature that estimates the effect of inequality on growth. It covers studies published in scientific journals during 1994–2014 that examine the impact on growth of inequality in income, land, and human capital distribution. We find traces of publication bias in this literature, as authors and journals are more willing to report and publish statistically significant findings, and the results tend to follow a predictable pattern over time, according to which negative and positive effects are cyclically reported. After correcting for these two forms of publication bias, we conclude that the high degree of heterogeneity in the reported effect sizes is explained by study conditions, namely the structure of the data, the type of countries included in the sample, the inclusion of regional dummies, the concept of inequality and the definition of income. In particular, our meta-regression analysis suggests that: cross-section studies systematically report a stronger negative impact than panel data studies; the effect of inequality on growth is negative and more pronounced in less developed countries than in rich countries; the inclusion of regional dummies in the growth regressions of the primary studies considerably weakens such effects; expenditure and gross income inequality tend to lead to different estimates of the effect size; land and human capital inequality are more pernicious to subsequent growth than income inequality is. We also find that the estimation technique, the quality of the data on income distribution, and the specification of the growth regression do not significantly influence the estimated effect sizes. These results provide new insights into the nature of the inequality–growth relationship and offer important guidelines for policy makers.
Reference: Crédit Suisse, Global Wealth Report 2018: US and China in the lead, October 10, 2018. See Table 3.1 (page 114) of the accompanying databook for mean and median wealth by country.
|
Experiment-9: “Stirling Engine”
Object: To observe the first and second laws of thermodynamics, reversible cycles, isochoric and isothermal changes, gas laws, efficiency, the Stirling engine, conversion of heat, and the thermal pump.
The Stirling engine is a heat engine that is vastly different from the internal-combustion engine in your car. Invented by Robert Stirling in 1816, the Stirling engine has the potential to be much more efficient than a gasoline or diesel engine. But today, Stirling engines are used only in some very specialized applications, like in submarines or auxiliary power generators for yachts, where quiet operation is important. Although there hasn't been a successful mass-market application for the Stirling engine, some very high-powered inventors are working on it.
A Stirling engine uses the Stirling cycle, which is unlike the cycles used in internal-combustion engines. The gases used inside a Stirling engine never leave the engine. There are no exhaust valves that vent high-pressure gases, as in a gasoline or diesel engine, and there are no explosions taking place. Because of this, Stirling engines are very quiet. The Stirling cycle uses an external heat source, which could be anything from gasoline to solar energy to the heat produced by decaying plants. No combustion takes place inside the cylinders of the engine.
There are hundreds of ways to put together a Stirling engine. In this article,
we'll learn about the Stirling cycle and see how two different configurations of
this engine work.
The Stirling Cycle
The key principle of a Stirling engine is that a fixed amount of a gas is sealed
inside the engine. The Stirling cycle involves a series of events that change the
pressure of the gas inside the engine, causing it to do work.
There are several properties of gases that are critical to the operation of Stirling engines (see the sketch after these two properties):
If you have a fixed amount of gas in a fixed volume of space and you raise the
temperature of that gas, the pressure will increase.
If you have a fixed amount of gas and you compress it (decrease the volume of
its space), the temperature of that gas will increase.
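The first property follows directly from the ideal gas law, PV = nRT; the second is heating under rapid (adiabatic) compression. Below is a minimal sketch with illustrative numbers; the amount of gas, the volume, and the use of air's heat-capacity ratio are all assumptions for the example, not values from this experiment.

```python
# Both gas properties, with illustrative numbers for a sealed working gas.
R = 8.314            # gas constant, J/(mol*K)
n = 0.01             # moles of gas sealed in the engine (assumed)

# Property 1: fixed volume, higher temperature -> higher pressure (P = nRT/V).
V = 1e-4             # 100 cm^3, held fixed
for T in (300, 400):
    print(T, "K ->", n * R * T / V, "Pa")   # pressure rises with temperature

# Property 2: compressing the gas raises its temperature
# (adiabatic relation: T2 = T1 * (V1/V2)**(gamma - 1)).
gamma = 1.4          # heat-capacity ratio for a diatomic gas like air
T1 = 300
print("halving the volume ->", T1 * 2 ** (gamma - 1), "K")   # ~396 K
```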
Let's go through each part of the Stirling cycle while looking at a simplified
Stirling engine. Our simplified engine uses two cylinders. One cylinder is heated
by an external heat source (such as fire), and the other is cooled by an external
cooling source (such as ice). The gas chambers of the two cylinders are
connected, and the pistons are connected to each other mechanically by a
linkage that determines how they will move in relation to one another.
There are four parts to the Stirling cycle. The two pistons in the simplified engine described above accomplish all of the parts of the cycle:
1. Heat is added to the gas inside the heated cylinder (left), causing pressure to
build. This forces the piston to move down. This is the part of the Stirling cycle
that does the work.
2. The left piston moves up while the right piston moves down. This pushes the
hot gas into the cooled cylinder, which quickly cools the gas to the temperature
of the cooling source, lowering its pressure. This makes it easier to compress
the gas in the next part of the cycle.
3. The piston in the cooled cylinder (right) starts to compress the gas. Heat
generated by this compression is removed by the cooling source.
4. The right piston moves up while the left piston moves down. This forces the gas
into the heated cylinder, where it quickly heats up, building pressure, at which
point the cycle repeats.
The Stirling engine only makes power during the first part of the cycle. There
are two main ways to increase the power output of a Stirling cycle:
Increase power output in stage one - In part one of the cycle, the pressure of
the heated gas pushing against the piston performs work. Increasing the
pressure during this part of the cycle will increase the power output of the
engine. One way of increasing the pressure is by increasing the temperature of
the gas. When we take a look at a two-piston Stirling engine later in this article,
we'll see how a device called a regenerator can improve the power output of
the engine by temporarily storing heat.
Decrease power usage in stage three - In part three of the cycle, the pistons
perform work on the gas, using some of the power produced in part one.
Lowering the pressure during this part of the cycle can decrease the power
used during this stage of the cycle (effectively increasing the power output of
the engine). One way to decrease the pressure is to cool the gas to a lower temperature.
This section described the ideal Stirling cycle. Actual working engines vary the
cycle slightly because of the physical limitations of their design. In the next two
sections, we'll take a look at a couple of different kinds of Stirling engines. The
displacer-type engine is probably the easiest to understand, so we'll start there.
Displacer-type Stirling Engine
Instead of having two pistons, a displacer-type engine has one piston and a
displacer. The displacer serves to control when the gas chamber is heated and
when it is cooled. This type of Stirling engine is sometimes used in classroom
demonstrations. You can even buy a kit to build one yourself!
In order to run, the engine above requires a temperature difference between
the top and the bottom of the large cylinder. In this case, the difference
between the temperature of your hand and the air around it is enough to run the engine.
The Stirling cycle
Main article: Stirling cycle
A pressure/volume graph of the idealized Stirling cycle.
The idealized or "text book" Stirling cycle consists of four thermodynamic
processes acting on the working fluid ( See diagram to right):
Points 1 to 2, Isothermal Expansion. The expansion-space and associated
heat exchanger are maintained at a constant high temperature, and the
gas undergoes near-isothermal expansion absorbing heat from the hot
Points 2 to 3, Constant-Volume (known as isovolumetric or isochoric)
heat-removal. The gas is passed through the regenerator, where it cools
transferring heat to the regenerator for use in the next cycle.
Points 3 to 4, Isothermal Compression. The compression space and
associated heat exchanger are maintained at a constant low temperature
so the gas undergoes near-isothermal compression rejecting heat to the
Points 4 to 1, Constant-Volume (known as isovolumetric or isochoric)
heat-addition. The gas passes back through the regenerator where it
recovers much of the heat transferred in 2 to 3, heating up on its way to
the expansion space.
Theoretical efficiency equals that of the hypothetical Carnot cycle - i.e. the highest efficiency attainable by any heat engine. However, though it is useful for illustrating general principles, the textbook cycle is a long way from representing what is actually going on inside a practical Stirling engine and should not be regarded as a basis for analysis. In fact, it has been argued that its indiscriminate use in many standard books on engineering thermodynamics has done a disservice to the study of Stirling engines in general. For a more exhaustive treatment of the 'real' Stirling cycle, see the main article referred to at the head of this section.
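For the idealized cycle with a perfect regenerator, the net work per cycle is W = nR(Th - Tc) ln(V2/V1) and the efficiency reduces to the Carnot value, 1 - Tc/Th. The sketch below uses illustrative numbers, not the experimental values reported later.

```python
import math

# Idealized Stirling cycle: per-cycle work and Carnot-limit efficiency.
n, R = 0.01, 8.314        # moles of working gas (assumed), J/(mol*K)
Th, Tc = 450.0, 300.0     # hot and cold reservoir temperatures, K
V1, V2 = 1e-4, 2e-4       # compression and expansion volumes, m^3

W_exp = n * R * Th * math.log(V2 / V1)   # isothermal expansion: heat in, work out
W_comp = n * R * Tc * math.log(V2 / V1)  # isothermal compression: work spent
W_net = W_exp - W_comp                   # net work per cycle
eta = 1 - Tc / Th                        # Carnot efficiency (perfect regenerator)

print(W_net, eta, W_net / W_exp)         # W_net / Q_in equals eta
```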
Other real-world issues reduce the efficiency of actual engines, due to limits of
convective heat transfer, and viscous flow (friction). There are also practical
mechanical considerations, for instance a simple kinematic linkage may be
favoured over a more complex mechanism needed to replicate the idealized
cycle, and limitations imposed by available materials such as non-ideal
properties of the working gas, thermal conductivity, tensile strength, creep,
rupture strength, and melting point.
DATA & CALCULATIONS
t = 24 min = 1440 s
Amount of alcohol V = 3.6 ml
Alcohol density ρ = 0.83 g/ml
Heat of combustion of alcohol H = 25 kJ/g
Mass of alcohol burnt per second m/t = 2.1x10⁻³ g/s
Thermal power of the burner Ph = 52.5 J/s
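The burner figures above can be checked with a few lines; the small mismatch with 52.5 J/s comes from rounding m/t to 2.1x10⁻³ g/s before multiplying.

```python
# Reproducing the burner thermal power from the logged quantities.
t = 24 * 60            # burn time: 24 min in seconds
V = 3.6                # ml of alcohol burnt
rho = 0.83             # g/ml
H = 25e3               # J/g, heat of combustion used in the report

m = V * rho            # ~2.99 g of alcohol
m_rate = m / t         # ~2.1e-3 g/s, matching the value above
P_h = m_rate * H       # ~51.9 J/s, close to the reported 52.5 J/s
print(m_rate, P_h)
```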
M(10⁻³Nm) N(min⁻¹) T1(ºC) T2(ºC) Wm(mJ) f(Hz) Pm(mW) Wpv(mJ) Wfr(mJ)
0 1100 136 37.9 0 18.3 0 49 49.0
4 992 167 73.2 25.1x10⁻³ 16.5 414x10⁻³ 24 23.9
6 840 172 72.8 37.7x10⁻³ 14.0 528x10⁻³ 27 26.9
8 880 168 73.0 50.3x10⁻³ 14.7 739x10⁻³ 25 24.9
10 731 173 72.6 62.8x10⁻³ 12.2 766x10⁻³ 29 28.9
N1(mole) 49/(87x8.31) = 6.7x10⁻²
N2(mole) 24/(120x8.31) = 2.4x10⁻²
N3(mole) 27/(122.5x8.31) = 2.6x10⁻²
N4(mole) 25/(120.5x8.31) = 2.5x10⁻²
N5(mole) 29/(122.5x8.31) = 2.8x10⁻²
Wh(J/s²) η(m.s²) ηth ηm(J/W)
2.86 0 7.2x10⁻¹ 0
3.18 8.8x10⁻³ 5.6x10⁻¹ 1x10⁻³
3.75 10.0x10⁻³ 5.8x10⁻¹ 1.4x10⁻³
3.57 14.1x10⁻³ 5.7x10⁻¹ 2x10⁻³
4.30 14.6x10⁻³ 5.8x10⁻¹ 2.1x10⁻³
Conclusion: The Stirling engine is widely used in engineering applications. In this experiment, the principle of the Stirling engine was demonstrated, but some of the measured values were inaccurate, mostly because of errors in our observations. There were also some connection problems with the cable jacks, which may have caused faulty readings. In general, the experiment was successful in illustrating the heat-work conversion in the Stirling engine.
|
In order to graph points on the coordinate plane, you have to understand the organization of the coordinate plane and know what to do with those (x, y) coordinates. If you want to know how to graph points on the coordinate plane, just follow these steps.
Understanding the Coordinate Plane
1. Understand the axes of the coordinate plane. When you're graphing a point on the coordinate plane, you will graph it in (x, y) form. Here is what you'll need to know:
- The x-axis goes left and right; the first coordinate is found on the x-axis.
- The y-axis goes up and down; the second coordinate is found on the y-axis.
- Positive numbers go up or right (depending on the axis). Negative numbers go left or down.
2. Understand the quadrants on the coordinate plane. Remember that a graph has four quadrants (typically labeled in Roman numerals). You will need to know which quadrant the point is in (a small helper sketch follows this list).
- Quadrant I gets (+,+); quadrant I is above the x-axis and to the right of the y-axis.
- Quadrant II gets (-,+); quadrant II is above the x-axis and to the left of the y-axis.
- Quadrant III gets (-,-); quadrant III is below the x-axis and to the left of the y-axis.
- Quadrant IV gets (+,-); quadrant IV is below the x-axis and to the right of the y-axis.
- For example, (5,4) is in Quadrant I, (-5,4) is in Quadrant II, (-5,-4) is in Quadrant III, and (5,-4) is in Quadrant IV.
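The sign pattern above translates directly into a small helper function (a sketch; points lying on an axis belong to no quadrant):

```python
def quadrant(x, y):
    """Return the quadrant of (x, y), or 'on an axis' for boundary points."""
    if x > 0 and y > 0: return "I"
    if x < 0 and y > 0: return "II"
    if x < 0 and y < 0: return "III"
    if x > 0 and y < 0: return "IV"
    return "on an axis"

print(quadrant(5, 4), quadrant(-5, 4), quadrant(-5, -4), quadrant(5, -4))
# I II III IV
```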
Graphing a Single Point
1. Start at (0, 0), or the origin. Just go to (0, 0), which is the intersection of the x and y axes, right in the center of the coordinate plane.
2. Move over x units to the right or left. Let's say you're working with the set of coordinates (5, -4). Your x coordinate is 5. Since five is positive, you'll need to move over five units to the right. If it was negative, you would move over 5 units to the left.
3. Move over y units up or down. Start where you left off, 5 units to the right of (0, 0). Since your y coordinate is -4, you will have to move down four units. If it were 4, you would move up four units.
4. Mark the point. Mark the point you found by moving over 5 units to the right and 4 units down, the point (5, -4), which is in the 4th quadrant. You're all done.
Following Advanced Techniques
1. Learn how to graph points if you're working with an equation. If you have a formula without any coordinates, then you'll have to find your points by choosing a random number for x and seeing what the formula spits out for y. Just keep going until you've found enough points and can graph them all, connecting them if necessary (see the sketch after this list). Here's how you can do it, whether you're working with a simple line or a more complicated equation like a parabola:
- Graph points from a line. Let's say the equation is y = x + 4. So, pick a random number for x, like 3, and see what you get for y. y = 3 + 4 = 7, so you have found the point (3, 7).
- Graph points from a quadratic equation. Let's say the equation of the parabola is y = x^2 + 2. Do the same thing: pick a random number for x and see what you get for y. Picking 0 for x is easiest. y = 0^2 + 2, so y = 2. You have found the point (0, 2).
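Here is a minimal sketch of this point-generating process for both examples above (the function names are arbitrary):

```python
# Generating plot-ready (x, y) points from an equation.
line = lambda x: x + 4          # y = x + 4
parabola = lambda x: x**2 + 2   # y = x^2 + 2

xs = range(-3, 4)
print([(x, line(x)) for x in xs])      # includes (3, 7), as in the example
print([(x, parabola(x)) for x in xs])  # includes (0, 2)
```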
2. Connect the points if necessary. If you have to make a line graph, draw a circle, or connect all of the points of a parabola or another quadratic equation, then you'll have to connect the points. If you have a linear equation, then draw lines connecting the points from left to right. If you're working with a quadratic equation, then connect the points with curved lines.
- Unless you are only graphing a point, you will need at least two points. A line requires two points.
- A circle requires two points if one is the center; three if the center is not included (Unless your instructor has included the center of the circle in the problem, use three).
- A parabola requires three points, one being the absolute minimum or maximum; the other two points should be opposites.
- A hyperbola requires six points; three on each axis.
3. Understand how modifying the equation changes the graph. Here are the different ways that modifying the equation changes the graph:
- Modifying the x coordinate moves the equation left or right.
- Adding a constant moves the equation up or down.
- Turning it negative (multiplying by -1) flips it over; if it is a line, it will change it from going up to down or going down to up.
- Multiplying it by another number will either increase or decrease the slope.
4. Follow an example to see how modifying the equation changes the graph. Consider the equation y = x^2, a parabola with its base at (0,0). Here are the differences you will see as you modify the equation (a short sketch follows this list):
- y = (x-2)^2 is the same parabola, except it is graphed two spaces to the right of the origin; its base is now at (2,0).
- y = x^2 + 2 is still the same parabola, except now it is graphed two spaces higher at (0,2).
- y = -x^2 (the negative is applied after the exponent ^2) is an upside down y = x^2; its base is (0,0).
- y = 5x^2 is still a parabola, but it gets larger even faster, giving it a thinner look.
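A quick sketch tabulating these modifications of y = x^2 (printing sample values rather than drawing, so it runs anywhere):

```python
# How each modification of y = x^2 changes the values (and hence the graph).
fns = {
    "x^2":     lambda x: x**2,        # base parabola, base at (0, 0)
    "(x-2)^2": lambda x: (x - 2)**2,  # shifted right, base at (2, 0)
    "x^2 + 2": lambda x: x**2 + 2,    # shifted up, base at (0, 2)
    "-x^2":    lambda x: -(x**2),     # flipped upside down
    "5x^2":    lambda x: 5 * x**2,    # grows faster, so it looks thinner
}
for name, f in fns.items():
    print(name, [f(x) for x in (-1, 0, 1, 2, 3)])
```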
How do I draw the graph of 3y=2x?
Solve the equation for y; y will equal (2/3)x. Pick several values for x and multiply each by 2/3 (for example, if x = 2 then y will be 2/3 times 2, or 4/3). Once you have done this for all of your x values, you are ready to graph. Locate your first x value on the horizontal axis, go up to the y value you calculated, and make a small dot. Do this for each x value, then connect the dots with a smooth line or curve (in this case, it will be a line).
Can you help me with (-6,-3) and determine the quadrant?
Quadrant III, since both coordinates are negative.
How do I graph y=3x?
Substitute the x values on your Cartesian plane into your equation, which will give you the corresponding y values. You can then plot the points onto your graph.
What do the (+,+), (-,-), (+,-), and (-,+) notations indicate on a coordinate graph?
These notations indicate whether the x and y values of a coordinate are positive or negative. You can use this information to determine the quadrant of a point on the coordinate plane. Quadrant I is (+,+), Quadrant II is (-,+), Quadrant III is (-,-), and Quadrant IV is (+,-).
How do I plot data points with one or both coordinates being negative?
First, look at the x-axis and locate the x-coordinate. For example, if the full coordinate is (-5,-9), the x-coordinate is -5. Find -5 on the x-axis. Once you find the x-coordinate, look at the y-axis. The y-coordinate is -9, so locate -9 on the y-axis. When you find both coordinates, plot the dot where the two number lines meet.
- If you are making these, you will most likely have to read them too. A good way to remember to go along the x-axis first and the y-axis second is to pretend that you are building a house: you have to build the foundation (along the x-axis) first before you can build up. The same goes if you go down; pretend you are making the basement. You still need the foundation first.
- A good way of remembering which axis is which is to imagine the vertical axis having a small slanted line on it, making it look like a "y".
- Axes are basically horizontal and vertical number lines intersecting at the origin (the origin on a coordinate plane is zero, or where both axes intersect). Everything "originates" from the origin.
|
A clever new design introduces a way to image the vast ocean floor.
- Neither light- nor sound-based imaging devices can penetrate the deep ocean from above.
- Stanford scientists have invented a new system that incorporates both light and sound to overcome the challenge of mapping the ocean floor.
- Deployed from a drone or helicopter, it may finally reveal what lies beneath our planet's seas.
A great many areas of the ocean floor, which covers about 70 percent of the Earth, remain unmapped. With current technology, mapping is an extremely arduous and time-consuming task, accomplished only by trawling unmapped areas with sonar equipment dangling from boats. Advanced imaging technologies that work so well on land are stymied by the relative impenetrability of water.
That may be about to change. Scientists at Stanford University have announced an innovative system that combines the strengths of light-based devices and those of sound-based devices to finally make mapping the entire sea floor possible from the sky.
The new system is detailed in a study published on IEEE Xplore.
"Airborne and spaceborne radar and laser-based, or LIDAR, systems have been able to map Earth's landscapes for decades. Radar signals are even able to penetrate cloud coverage and canopy coverage. However, seawater is much too absorptive for imaging into the water," says lead study author and electrical engineer Amin Arbabian of Stanford's School of Engineering in Stanford News.
One of the most reliable ways to map a terrain is by using sonar, which deduces the features of a surface by analyzing sound waves that bounce off it. However, if one were to project sound waves from above into the sea, more than 99.9 percent of those sound waves would be lost as they passed into water. If they managed to reach the seabed and bounce upward out of the water, another 99.9 percent would be lost.
Electromagnetic devices—using light, microwaves, or radar signals—are also fairly useless for ocean-floor mapping from above. Says first author Aidan Fitzpatrick, "Light also loses some energy from reflection, but the bulk of the energy loss is due to absorption by the water." (Ever try to get phone service underwater? Not gonna happen.)
The solution presented in the study is the Photoacoustic Airborne Sonar System (PASS). Its core idea is the combining of sound and light to get the job done. "If we can use light in the air, where light travels well, and sound in the water, where sound travels well, we can get the best of both worlds," says Fitzpatrick.
An imaging session begins with a laser fired down to the water from a craft above the area to be mapped. When it hits the ocean surface, it's absorbed and converted into fresh sound waves that travel down to the target. When these bounce back up to the surface and out into the air and back to PASS technicians, they do still suffer a loss. However, using light on the way in and sound only on the way out cuts that loss in half.
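The article's 99.9 percent figures make the advantage easy to quantify: a sound-only round trip keeps roughly 0.001 x 0.001, or one millionth, of the energy, while PASS pays the air-water penalty only once. In decibel terms that is -60 dB versus -30 dB, which is what "cuts that loss in half" means here. A minimal sketch of the arithmetic:

```python
import math

# Round-trip energy retention implied by the article's 99.9% loss figures.
one_way = 1e-3                   # fraction surviving one air-water crossing
sound_only = one_way * one_way   # sound in and sound out: two crossings
pass_system = one_way            # light in (low loss), sound out: one crossing

to_db = lambda frac: 10 * math.log10(frac)
print(to_db(sound_only))    # -60.0 dB
print(to_db(pass_system))   # -30.0 dB: half the loss on the decibel scale
```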
This means that the PASS transducers that ultimately retrieve the sound waves have plenty to work with. "We have developed a system," says Arbabian, "that is sensitive enough to compensate for a loss of this magnitude and still allow for signal detection and imaging." From there, software assembles a 3D image of the submerged target from the acoustic signals.
PASS was initially designed to help scientists image underground plant roots.
Although its developers are confident that PASS will be able to see down thousands of meters into the ocean, so far it's only been tested in an "ocean" about the size of a fish tank—tiny and obviously free of real-world ocean turbulence.
Fitzpatrick says that, "current experiments use static water but we are currently working toward dealing with water waves. This is a challenging, but we think feasible, problem."
Scaling up, Fitzpatrick adds, "Our vision for this technology is on-board a helicopter or drone. We expect the system to be able to fly at tens of meters above the water."
Welcome to the world's newest motorsport: manned multicopter races that exceed speeds of 100 mph.
- Airspeeder is a company that aims to put on high-speed races featuring electric flying vehicles.
- The so-called Speeders are able to fly at speeds of up to 120 mph.
- The motorsport aims to help advance the electric vertical take-off and landing (eVTOL) sector, which could usher in the age of air taxis.
Airspeeder, the world's newest motorsport, is set to debut its first race in 2021.
What can you expect to see? Something like a mix between Red Bull's air racing and the pod-racing scenes from "Star Wars: The Phantom Menace" — manned electric cars flying close together in the desert at 120 mph, nose-diving off cliffs, and racing over lakes, all while hopefully avoiding collisions.
Airspeeder calls its vehicles flying electric cars, but it's probably easier to think of the wheelless multicopters as car-sized drones. Powered by electric batteries, the carbon-fiber craft use eight propellers to fly, and the tiltable motors are designed to allow pilots to navigate through the course's pylons at high speeds.
To prevent crashes, Airspeeder is working with the companies Acronis and Teknov8 to develop "high-speed collision avoidance" systems for its Speeders.
"As they compete, Speeders will utilise cutting-edge LiDAR and Machine Vision technology to ensure close but safe racing, with defined and digitally governed no-fly areas surrounding spectators and officials," Airspeeder wrote in a blog post.
Beyond motorsports, Airspeeder hopes to help advance the electric vertical take-off and landing (eVTOL) sector. This sector is where companies like Uber, Hyundai, and Airbus are working to develop air taxis, which could someday take the ridesharing industry into the skies. By 2040, the autonomous urban aircraft industry could be worth $1.5 trillion, according to a 2019 report from Morgan Stanley.
Still, many technical and regulatory hurdles remain. Matt Pearson, Airspeeder's founder and CEO, thinks the futuristic motorsport will help to not only speed up that process, but also pave the way for self-driving cars.
"Even with autonomous vehicles on the ground, it's a difficult thing to get right because computers have to make decisions very fast," Airspeeder's founder and CEO, Matt Pearson, told GQ." But in a racing environment, you have a pretty controlled course and you have the ability to make all the vehicles cooperate with each other. You have a whole load of vehicles talking to each other, so if there's an incident or a pilot slows down or there's a traffic jam on the course they're all aware of each other. This is something we think will revolutionise autonomous vehicles on the ground. It's technology that will make flying cars a reality in our cities in the future."
Airspeeder has yet to announce a date for the first race, but Pearson said he hopes to put on three races over the first season. The company is developing two courses: one in California's Mojave Desert, and one near Coober Pedy in South Australia.
Here's how the world's technology conversations are changing.
COVID is changing the world and our technology conversations are changing with it.
As part of work identifying promising technology use cases to combat COVID, The Boston Consulting Group recently used contextual AI to analyze more than 150 million English language media articles from 30 countries published between December 2019 to May 2020. While the research reflects media coverage and not technology development, the analysis still reveals a range of shifting interests. These shifts can give a sense for how the world has refocused itself to tackle the crisis in the short term. The findings also show the crisis could be leaving some key risks or solutions under-discussed or under-explored, potentially creating new vulnerabilities.
Tech talk after COVID
As can be seen in the below figure, only half of the top tech/telco topics pre-COVID remain in the top ten during COVID. This speaks to a fundamental change in core interests and principal concerns.
Image: World Economic Forum, Boston Consulting Group
The first priority during this pandemic has been the protection of individuals, and rightly so. As a result, topics such as biotech/medtech have gained prominence as researchers seek out new treatments and a potential vaccine. This shift has fueled a new interest in telemedicine as well. This technology was slow in adoption for outpatient care pre-COVID, but has seen enormous growth in the past 6 months, as lockdowns and the virus forced patients and doctors to seek new solutions for care.
The coronavirus has also brought new uncertainties. With this, data analytics has risen 35% from pre-COVID levels, as individuals and companies use emerging data from medical research and changing habits to forecast everything from the path of the pandemic to potential supply chain disruptions.
Talk regarding delivery drones has increased by 57% in topic share, thanks in part to new uses of drones to deliver much-needed supplies such as groceries and PPE in areas hard to reach after COVID-19 lockdowns.
COVID-19 boosted the number of articles written about 5G, though the context of these conversations has shifted. Articles pre-COVID focused on the potential capabilities of a 5G rollout. As the virus spread, however, conspiracy theories linking 5G technology to the pandemic fueled fear and misinformation campaigns.
Image: World Economic Forum, Boston Consulting Group
As part of this research, analysis dug into top discussion topics in 4 of the world's key regions: India, China, the European Union and the US. Here contextual AI studied more than 2,500 publications between January and May 2020. To be sure, a number of factors determine the types of media coverage that emerges in different regions. Still, this exercise is another window into how technology conversations differed across contexts as countries faced the virus in different ways, leveraging different tools and resources.
Left unsaid or under-discussed
As the pandemic spread, "business as usual" gave way to crisis management. As it did, traditionally popular tech topics such as artificial intelligence and machine learning, the internet of things, blockchain, robotics, and cybersecurity have been discussed less often than usual during the pandemic. In some cases they have fallen off the map entirely.
This trend reflects the need to tackle the immediate health demands of the crisis, but it risks ignoring the key role that other technologies or risks will play in longer-term solutions.
Blockchain, for instance, will be key for more resilient supply chains and integral to the equitable deployment of the vaccine once it is available.
AI and machine learning have fallen in rank but have proven integral to a range of efforts, including helping researchers sort quickly through the massive amounts of real-time data being generated about the disease.
Cybersecurity has fallen off the list of top ten tech topics entirely, a fact that belies the growing risk cyberattacks pose to a range of sectors, including the newly remote workforce. The medical field is particularly vulnerable to cyber threats: the past months have seen a 75% increase in ransomware attacks against health professionals, and some attacks have even targeted researchers seeking a cure for COVID. The World Health Organization reported a five-fold increase in cyberattacks compared with the same period last year.
The global pandemic is forcing us to re-think the way we work and how we do business. As our attentions focus on the most immediate threats, we must remember to consider the longer term. Looking past just the crisis before us can give us a fuller picture of the risks we face - as well as the opportunities we might not be exploring and the solutions we can put into place.
Proponents of using drones in foreign conflicts argue that they reduce harm for civilians and U.S. military personnel alike. Here's why that might be wrong.
- There has been a huge increase in drone usage since the start of the war on terror. Proponents of drone warfare claim it reduces civilian casualties and collateral damage, that it's cheaper than conventional warfare tactics, and that it's safer for U.S. military personnel.
- The data suggests those claims may be false, says scholar Abigail Blanco. Drones are, at best, about equivalent to conventional technologies, but in some cases may actually be worse.
- Blanco explains how skewed US government definitions fail to give honest data on civilian casualties. Drone operators also suffer worse psychological repercussions after a strike, owing to factors such as the intimacy of prolonged surveillance and heat-sensing technology that lets an operator watch the heat leave a dying body to confirm a kill.
The FAA is not amused by flame-throwing drones.
- Federal Aviation Administration publishes a notice warning the public not to weaponize their personal drones.
- Doing so will result in a $25,000 fine per violation.
- The FAA is keeping pace with the rapid development of this new and popular technology.
Drones can be an incredible boon to society if properly utilized. Farmers can monitor their crops with them, and public agencies can react faster to emergencies. And these are just a few of the possibilities drones grant us. That said, they are also capable of incredible destruction.
Indeed, the United States Federal Aviation Administration recently published a notice addressed to the public — and directed an email to all remote drone pilots — warning about the illegality of weaponizing a drone.
For the past several years, there's been a rapid increase in the use of drone aircraft for both hobbyist and commercial purposes. The federal government has been trying to keep pace with the breakneck speed of drone innovation. It has put in place rules to ensure unmanned aerial vehicles don't interfere with commercial or military air traffic, while also making sure that all drone operators who engage in commercial services obtain a special Part 107 license.
It was only a matter of time before it would have to regulate their weaponization as well.
Recently, a company called ThrowFlame made some waves when it announced its "TF-19" Wasp flamethrower attachment for drones. It works with most unmanned drones that can handle a payload of five pounds or more. ThrowFlame claims this isn't intended as a weapon. The FAA disagrees.
FAA’s response to weaponized drones
Published under the title: "Drones and Weapons, A Dangerous Mix," FAA officials put forth some clear guidelines on the illegality of weaponizing drones without proper licensing:
"Perhaps you've seen online photos and videos of drones with attached guns, bombs, fireworks, flamethrowers, and other dangerous items. Do not consider attaching any items such as these to a drone because operating a drone with such an item may result in significant harm to a person and to your bank account."
Just a single infraction will end up costing you $25,000. Unless you're a defense contractor or have another special case, it's doubtful you'd need to weaponize your drone.
Weaponizing a drone is in violation of the 2018 FAA Reauthorization Act, Section 363, which stipulates under the section "Prohibition Against Weapons," that "Unless authorized by the Administrator, a person may not operate an unmanned aircraft or unmanned aircraft system that is equipped or armed with a dangerous weapon."
A "dangerous weapon" includes any item that can be used or is capable of causing serious bodily harm, or death. Flamethrowers definitely fit the bill.
For a new technology the public is already a bit apprehensive about, it's probably best if we don't start strapping flamethrowers onto drones just because we can.
Peering into deep space with an instrument they built, a group of students and researchers caught a surprising glimpse of a newly discovered black hole 30,000 light-years from Earth.
In the fall of 2019, students and researchers from the Massachusetts Institute of Technology and Harvard University were working with an instrument that they designed and now operate, the Regolith X-Ray Imaging Spectrometer (REXIS), which is on board NASA's OSIRIS-REx spacecraft. While using the shoebox-size instrument to observe the asteroid Bennu, the spacecraft's destination, the team made an unexpected detection: a new black hole in the constellation Columba.
REXIS measures X-rays that are emitted from objects like Bennu in response to solar radiation. On Nov. 11, 2019, the collaborative group of researchers and students spotted X-rays radiating from a point off the edge of Bennu.
"Our initial checks showed no previously cataloged object in that position in space," Branden Allen, a Harvard research scientist and student supervisor who first noticed the radiation in the data, said in a NASA statement (opens in new tab).
Upon further analysis, the team found that the X-rays seen off the edge of Bennu were coming from a newly flaring black hole X-ray binary. The black hole, known as MAXI J0637-430, was discovered just a week before these observations by researchers using Japan's MAXI telescope, which operates from aboard the International Space Station. The X-rays were also detected by NASA's Neutron Star Interior Composition Explorer (NICER) telescope, which is also on the space station.
Although both telescopes were able to detect the X-rays from low Earth orbit, REXIS detected the event while millions of miles from Earth. This observation marks the first time that such an outburst has been detected from interplanetary space, according to the statement.
"Detecting this X-ray burst is a proud moment for the REXIS team. It means our instrument is performing as expected and to the level required of NASA science instruments," Madeline Lambert, an MIT graduate student who designed the instrument's command sequences (which ended up revealing the black hole), said in the statement.
The REXIS instrument, in addition to observing the cosmos, provides an opportunity for students and young scientists to get hands-on experience. To date, nearly 100 undergraduate and graduate students have worked on the REXIS team.
Climate change latest news from NASA
About Climate Change & Global Warming
In the mid-2030s, every U.S. coast will experience rapidly increasing high-tide floods, when a lunar cycle will amplify rising sea levels caused by climate change.
High-tide floods – also called nuisance floods or sunny day floods – are already a familiar problem in many cities on the U.S. Atlantic and Gulf coasts. The National Oceanic and Atmospheric Administration (NOAA) reported a total of more than 600 such floods in 2019. Starting in the mid-2030s, however, the alignment of rising sea levels with a lunar cycle will cause coastal cities all around the U.S. to begin a decade of dramatic increases in flood numbers, according to the first study that takes into account all known oceanic and astronomical causes for floods.
Led by the members of the NASA Sea Level Change Science Team from the University of Hawaii, the new study shows that high tides will exceed known flooding thresholds around the country more often. What’s more, the floods will sometimes occur in clusters lasting a month or longer, depending on the positions of the Moon, Earth, and the Sun. When the Moon and Earth line up in specific ways with each other and the Sun, the resulting gravitational pull and the ocean’s corresponding response may leave city dwellers coping with floods every day or two.
What Is the Sun's Role in Climate Change?
From NASA's Global Climate Change Website
The Sun powers life on Earth; it helps keep the planet warm enough for us to survive. It also influences Earth’s climate: We know subtle changes in Earth’s orbit around the Sun are responsible for the comings and goings of the past ice ages. But the warming we’ve seen over the last few decades is too rapid to be linked to changes in Earth’s orbit, and too large to be caused by solar activity.
The Sun doesn’t always shine at the same level of brightness; it brightens and dims slightly, taking 11 years to complete one solar cycle. During each cycle, the Sun undergoes various changes in its activity and appearance. Levels of solar radiation go up or down, as does the amount of material the Sun ejects into space and the size and number of sunspots and solar flares. These changes have a variety of effects in space, in Earth’s atmosphere and on Earth’s surface.
The current solar cycle began January 4, 2008, and appears to be headed toward the lowest level of sunspot activity since accurate recordkeeping began in 1750. It’s expected to end sometime between now and late 2020. Scientists don’t yet know with confidence how strong the next solar cycle may be.
What Effect Do Solar Cycles Have on Earth’s Climate?
According to the United Nations’ Intergovernmental Panel on Climate Change (IPCC), the current scientific consensus is that long and short-term variations in solar activity play only a very small role in Earth’s climate. Warming from increased levels of human-produced greenhouse gases is actually many times stronger than any effects due to recent variations in solar activity.
For more than 40 years, satellites have observed the Sun's energy output, which has gone up or down by less than 0.1 percent during that period. Since 1750, the warming driven by greenhouse gases coming from the human burning of fossil fuels is over 50 times greater than the slight extra warming coming from the Sun itself over that same time interval.
NASA and France’s space agency Centre National d’Études Spatiales (CNES) started jointly flying satellite altimeters in the early 1990s, beginning a continuous space-based record of sea surface height with high accuracy and near-global coverage. That legacy continues with the 2020 launch of the joint U.S.-European Sentinel-6 Michael Freilich mission and its altimeter, which will provide scientists with an uninterrupted satellite record of sea level surpassing three decades. The mission is a partnership between NASA, NOAA, ESA (European Space Agency), the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), and CNES.
NASA sea level researchers have long worked to understand how Earth’s changing climate affects the ocean. Along with launching satellites that contribute data to the long global record of sea surface height, NASA-supported scientists look to understand the causes of sea level change globally and regionally.
Through testing and modeling they work to forecast how much coastal flooding U.S. communities will experience by the mid-2030s and provide an online visualization tool that enables the public to see how specific areas will be affected by sea level rise. Agencies at the federal, state, and local levels use NASA data to inform their plans on adapting to and mitigating the effects of sea level rise.
The above graph compares global surface temperature changes (red line) and the Sun's energy that Earth receives (yellow line) in watts (units of energy) per square meter since 1880. The lighter/thinner lines show the yearly levels while the heavier/thicker lines show the 11-year average trends. Eleven-year averages are used to reduce the year-to-year natural noise in the data, making the underlying trends more obvious.
The amount of solar energy that Earth receives has followed the Sun’s natural 11-year cycle of small ups and downs with no net increase since the 1950s. Over the same period, global temperature has risen markedly. It is therefore extremely unlikely that the Sun has caused the observed global temperature warming trend over the past half-century. Credit: NASA/JPL-Caltech
Satellites Help Scientists Track Dramatic Wetlands Loss in Louisiana
Researchers mapped land change in coastal Louisiana from 1984 to 2020. Basins that failed to build new soil, such as Terrebonne and Barataria, experienced the most land loss -- more than 180 square miles (466 square kilometers). Credit: Jensen et al. Journal of Geophysical Research: Biogeosciences
From Lake Pontchartrain to the Texas border, Louisiana has lost enough wetlands since the mid-1950s to cover the entire state of Rhode Island. Using a first-of-its-kind model, NASA-funded researchers quantified those wetlands losses at nearly 21 square miles (54 square kilometers) per year since the early 1980s.
In the new study, scientists used the NASA-U.S. Geological Survey Landsat satellite record to track shoreline changes across Louisiana from 1984 to 2020. Some of those wetlands were submerged by rising seas; others were disrupted by oil and gas infrastructure and hurricanes. But the primary driver of losses was coastal and river engineering, which can have positive or negative effects depending on how it is implemented.
Centimeter-by-centimeter, wetlands are built by the slow accumulation – accretion – of mineral sediment and organic material carried by rivers and streams. Accretion makes new soil and counters erosion, the sinking of land, and the rise of sea level.
Human intervention and engineering often hold back or divert the flow of sediments that naturally accrete to build and replenish wetlands. For instance, reinforced levees and thousands of miles of canals and excavated banks have isolated many wetlands from the Mississippi River and the network of streams that course through its delta like veins and capillaries. In a few cases, engineering projects have added sediment to delta areas and built new land.
By analyzing Landsat imagery with tools from cloud computing, the researchers developed a remote sensing model that focused on accretion or the lack of it. Basins that failed to build new soil, such as Terrebonne and Barataria, experienced the most land loss over the study period -- more than 180 square miles (466 square kilometers). Other areas gained ground, including 33.6 square miles (87 square kilometers) of new land in the Atchafalaya Basin and 43 square miles (112 square kilometers) in the area known as the “Bird’s Foot Delta” at the mouth of the Mississippi River.
“The Louisiana coastal system is highly engineered,” said Daniel Jensen, lead author and postdoctoral researcher at NASA’s Jet Propulsion Laboratory in Southern California. “But the fact that ground has been gained in some places indicates that, with enough restoration efforts to reintroduce fresh water supply and sediment, we could see some wetland recovery in the future.”
Understanding wetland dieback and recovery is critically important because the Mississippi River Delta, like many of the world’s deltas, drives local and national economies through farming, fisheries, tourism, and shipping. “For the 350 million people who live and farm on deltas around the world, coastal wetlands provide a key link in the food chain,” said JPL’s Marc Simard, principal investigator of NASA’s Delta-X mission and co-author of the paper.
In several airborne and field campaigns since 2016, the Delta-X research team has been studying the Mississippi River Delta, the seventh largest on Earth, using airborne sensing and field measurements of water, vegetation, and sediment changes in the face of rising sea level. The Landsat analysis builds on this airborne mission. Delta-X is part of NASA's Earth Venture Suborbital (EVS) program, managed at NASA's Langley Research Center in Hampton, Virginia.
The new model by Jensen and colleagues is the first to directly estimate soil accretion rates in coastal wetlands using satellite data. Working with ground-based accretion records from Louisiana’s Coastwide Reference Monitoring System, the scientists were able to estimate amounts of mineral sediment from water pixels in the Landsat imagery and organic material from the land pixels.
The researchers said their approach could be applied beyond Louisiana because wetland loss and resiliency is a global phenomenon. From the Great Lakes to the Nile Delta, the Amazon to Siberia, wetlands are found on every continent except Antarctica. And they are declining in most places. Wetlands were recently called some of the “most vulnerable, most threatened, most valuable, and most diverse” ecosystems on the planet, according to an international analysis co-authored by NASA researchers.
But they also said a new generation of spaceborne tools, such as synthetic aperture radar, can increasingly inform conservation policies on the ground. This is because satellites support near-continuous mapping of ecosystems at a scale and consistency that is nearly impossible through traditional surveys and field work.
The futures of our wetlands and coastal communities are intertwined with climate change, so sustainable management is critical. By storing decomposing plant matter in soil and roots, wetlands act as “blue carbon” sinks, preventing some greenhouse gases (carbon dioxide and methane) from escaping into the atmosphere. When vegetation dies, drowns, and fails to grow back, wetlands can no longer sequester (bury) carbon in soil and vegetation. At current rates of wetland loss in coastal Louisiana, carbon burial may have decreased 50% from 2013 estimates.
“Forty percent of the human population lives within a hundred kilometers of a coast,” Simard said. “It’s critical that we understand the processes that protect those lands and the livelihood of the people living there.”
Map of soil accretion in coastal Louisiana, showing higher buildup in parts of Atchafalaya and the “Bird’s Foot Delta,” where the Mississippi River system deposits mineral-rich sediment during flood periods. Credit: Jensen et al. Journal of Geophysical Research: Biogeosciences.
When a function calls itself, it is known as recursion in C++. The function which calls itself is known as a recursive function.
A function that calls itself and performs no further work after the recursive call is known as tail recursive. In tail recursion, the recursive call is generally made in the return statement.
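As a quick illustration, here is a minimal sketch of a tail-recursive function (the function name and values are ours, not part of the original tutorial):

#include <iostream>

// Tail recursion: the recursive call is made in the return statement,
// so no work remains to be done after the call returns.
void countDown(int n) {
    if (n == 0)
        return;              // base case stops the recursion
    std::cout << n << " ";
    return countDown(n - 1); // tail call: the last action of the function
}

int main() {
    countDown(5); // prints: 5 4 3 2 1
    return 0;
}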
Let's see a simple example of recursion.
C++ Recursion Example
Let's see an example to print factorial number using recursion in C++ language.
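The program listing itself has not survived in this copy of the tutorial; a minimal version consistent with the output shown below might look like this:

#include <iostream>
using namespace std;

// Recursive factorial: factorial(n) = n * factorial(n - 1),
// with factorial(0) = factorial(1) = 1 as the base case.
// (int overflows for n > 12; fine for a tutorial example.)
int factorial(int n) {
    if (n <= 1)
        return 1;                // base case
    return n * factorial(n - 1); // recursive call
}

int main() {
    int n;
    cout << "Enter any number: ";
    cin >> n;
    cout << "Factorial of a number is: " << factorial(n);
    return 0;
}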
Output:
Enter any number: 5
Factorial of a number is: 120
We can understand the recursive calls in the above program step by step: factorial(5) calls factorial(4), which calls factorial(3), and so on down to factorial(1), which returns 1; the pending multiplications then unwind as 1 × 2 × 3 × 4 × 5 = 120.
The democratic system in Sweden
Last updated: 3 10 2018
Sweden is a representative democracy and is governed on the basis of a democratic structure at different levels of society. Sweden is also a monarchy. This means that we have a king or queen who is the country's head of state. However, the head of state has no political power and has a merely ceremonial role. It is the democratically elected politicians who run the country.
The western Riksdag building is reflected in Strömmen with a view from the west. Photo: Melker Dahlstrand
The Instrument of Government, which is the fundamental law that determines how Sweden is governed, begins with the sentence "All public power in Sweden proceeds from the people". This means that all decisions made at different levels of society have to be based on the opinions and interests of Sweden's inhabitants.
Decisions are made at three different political levels in Sweden. These levels are the municipalities, the county councils/regions and the central government. As Sweden is a member of the European Union (EU), there is also a level of decision-making above the central government. The EU is a European association of, at the moment, 28 countries. At all levels, there are politicians that the inhabitants have voted into power in general elections. These politicians are also called members. Politicians sit in the decision-making assemblies to which they are elected: municipal councils, county council/regional assembly, the Riksdag and the European Parliament.
In a democracy, it is important that there are built-in checks and balances so that corruption and misuse of power are avoided. One way to do this is to divide power between different actors, who are able to watch over one another in various ways. There are several examples of this in the democratic system in Sweden. One example is that municipalities and county councils are autonomous, which is one way to counteract Sweden becoming too centrally governed and the central government being the sole decision-making authority. One further example is that central government power is split between the Riksdag, which makes laws, the Government, which implements laws, and the courts, which judge based on the laws. The Riksdag also has the job of scrutinising and controlling the Government. If the Government neglects its duties, the Riksdag can force it to stand down. The fundamental laws provide the media and the general public with the opportunity to gain an insight into how Sweden is governed. All this contributes to Sweden suffering less corruption and misuse of power than many other countries.
The central government
The central government consists of the Riksdag, the Government and about 350 central government-owned companies and central government committees and authorities (the central government authorities). The Riksdag makes decisions about what is to be done in society, and the Government then executes and implements these decisions with the help of the Government Offices of Sweden and the central government authorities.
The Riksdag is Sweden's parliament, which enacts laws. It is the highest decision-making assembly in the country. The Riksdag is made up of political representatives elected by the Swedish voters at the national level. Political power is strongly linked to political parties as the members of the Riksdag are elected as representatives of political parties. The Riksdag has 349 members who are elected every four years. The Riksdag's most important tasks are:
- making new laws and abolishing old ones,
- setting the central government budget, i.e. determining the central government's annual income (taxes and fees) and expenditure,
- scrutinising how the Government and the authorities are conducting their work, and
- appointing a Prime Minister, who in turn forms a government.
The Swedish Government
The Government has executive authority. This involves being responsible for the day-to-day work of governing the country. This includes presenting proposals for the central government budget and setting guidelines for how the central government's money is to be used, leading the Swedish Armed Forces and being responsible, together with the Riksdag, for foreign policy. The Government Offices of Sweden, where a large number of civil servants work, is there to assist the Government.
It is usually the largest political party in the Riksdag, or two or more cooperating parties if no party has a majority, that form a government. The person appointed as Prime Minister by the Riksdag chooses which ministers will be responsible for different policy areas. Each minister in the Government heads a ministry. The ministry has various departments that are responsible for different areas. For example, the Ministry of Education and Research is responsible for issues concerning schools and education, and the Ministry of Culture for cultural matters among other things.
The central government authorities
The central government authorities consist of the Government, the courts and the administrative authorities. Examples of central government authorities are Arbetsförmedlingen, the Swedish Social Insurance Agency and the Swedish Transport Administration. The Government may not dictate how an authority is to apply a law or make a decision in a case concerning an authority's work. The authorities are independent, but they have to comply with the laws and guidelines the Government decides on. In many other countries, it is common for a politician who is a government minister to have the power to directly intervene in the ongoing work of authorities. There is no such opportunity in Sweden. There are laws to prevent what is known as ministerial government.
The judicial system
The judicial system normally includes those authorities responsible for the rule of law and maintaining law and order. The courts are the foundations of the judicial system. The judicial system also encompasses the authorities responsible for preventing and investigating crime, e.g. the police and the Swedish Crime Victim Compensation and Support Authority.
The Swedish courts consist of over 80 different authorities and committees. There are three types of court in Sweden: general courts, administrative courts and special courts. The courts can lay down punishments and settle disputes. The general courts consist of district courts, courts of appeal and the Supreme Court. The general courts handle matters including criminal cases, family cases and cases between companies or private individuals. The administrative courts consist of the administrative courts, the administrative courts of appeal and the Supreme Administrative Court. The administrative courts settle disputes, primarily between individuals and authorities. This can involve tax cases, cases involving aliens or citizenship (the migration courts), or disputes with the Swedish Social Insurance Agency or the municipality.
The special courts settle disputes within various special areas, for example labour law or consumer issues.
Having your case tried in an impartial and independent court is a fundamental right of all those who live in Sweden. According to the Swedish constitution, the work of the courts is governed by the law, but they are otherwise independent. Neither members of the Riksdag nor ministers may influence the courts' decisions.
The rule of law means that all people are equal before the law. A person is to be considered innocent until s/he has been found guilty by a court. The rule of law is an important aspect of democracy and defines the judicial relationship between the individual and the state. The aim is for all people to be protected from being wronged by each other, by the authorities and their representatives and by society in general, and for all people to be guaranteed their rights and freedoms. Legislation must be unequivocal: it has to be clearly stated what is legal and what is illegal. Someone who commits a crime must be able to understand what the consequences will be for him or her.
Someone who believes that an authority such as the Swedish Social Insurance Agency or a municipality has made an incorrect decision can appeal this. The authority that has made the decision has to tell them how to appeal. This information is usually at the end of the text informing the person of the decision.
Photo: Patrik Svedberg (www.domstol.se)
Everyone in Sweden lives in a municipality. The country has 290 municipalities, all of which are autonomous in many ways. A municipality is led by a democratically elected municipal council and by boards and committees appointed by the municipal council. The Local Government Act specifies what the responsibilities and obligations of county councils/regions and municipalities are. The three biggest municipalities are Stockholm, Gothenburg and Malmö, but there are many municipalities with more than 100,000 inhabitants. Municipalities can also be called cities.
The municipalities' responsibilities include ensuring that there are schools, preschools and libraries, home-help services for older people and income support for those who require it. They also have to ensure that there is a fire brigade and street cleaning, and they have to plan roads, housing, water and electricity. The municipalities require money to be able to deliver all these services. A municipality obtains income from municipal taxes, fees and grants from central government. Inhabitants who have an income pay tax in the municipality in which they are registered on the population register. The amount of tax someone pays depends on which municipality they live in and what their income is.
County councils, regions and counties
There are 21 counties in Sweden. There are a number of municipalities in each county. Each county has its own county administrative board. The Government appoints county governors who lead the county administrative boards. The county administrative boards are the Government's representatives in the counties. Their most important task is to achieve the goals the Riksdag and the Government have laid down, while also taking into account the circumstances of the individual county.
Sweden also has county councils (some county councils are called regions). The county council is a political organisation that covers the same geographical area as the county. The county councils have the right to impose tax and are responsible for certain public services, primarily healthcare. They are also involved in cultural issues, local public transport and regional planning. There are currently 20 county councils and regions in Sweden. The regions and county councils are led by a democratically elected assembly called the regional assembly or county council assembly. For example, it can be said that Region Västra Götaland is formally Västra Götaland County Council.
The European Parliament in Brussels. Photo: www.europaparlamentet.se
The EU is an economic and political partnership between a number of European countries. The EU was formed in the aftermath of the Second World War as an economic and political partnership between Belgium, France, Italy, Luxembourg, the Netherlands and what was then West Germany. The aim was to cooperate economically and politically in order to avoid further world wars, preserve peace and increase trade within Europe. One of the founding principles was that countries who trade with each other become economically dependent on each other and thus avoid conflict. It can be said that every member state has chosen to hand over a portion of their sovereignty to the EU in order to collectively gain greater influence in the world.
Sweden became a member of the EU in 1995. The EU now has 28 member states; these work together on matters such as the free movement of goods, services, capital and people, environmental protection and security and defence. Many of the member states have introduced the common currency, the euro; Sweden has not.
The EU has three important institutions that together make laws: the European Commission, the European Parliament and the Council of Ministers, which is also known as the Council of the European Union. These institutions work in Brussels, the capital of Belgium, in the French city of Strasbourg, and in Luxembourg. The 28 member states cooperate in three different ways:
Decisions that all member states have to comply with. This encompasses the laws made by the EU. EU legislation takes precedence over that of a member state. Many of the laws enacted are intended to make it easier to conduct business, travel and work within the EU. There is a court specifically for EU legislation, called the Court of Justice of the European Union and located in Luxembourg.
Voluntary cooperation between the 28 member states, without legislation. For example, when the EU decides on foreign policy and military interventions, this is done at the intergovernmental level.
Each member state has the right to self-determination. However, all laws and regulations that countries enact must be consistent with what is stated in the laws and regulations there are at the supranational level.
Power is divided between many
Although formal political power is divided between different levels; municipality, county council and region, central government and the EU; there are several power centres in society that are of significance to the democratic system.
The mass media, the market and civil society are important actors and arenas in a democratic society.
The mass media (newspapers, radio, TV and internet) are independent of the state. This means that they are free to provide information about and scrutinise politicians and other people who have power in society. The mass media also have an important role in terms of creating a debate concerning topical social issues.
Radio Sweden (SR) and Swedish Television (SVT) are owned by foundations that are independent of the state. Their activities are paid for via the television and radio charge that households pay; this is known as the TV licence. The channels are therefore not funded by advertising or central government grants and are thus known as public service. Their job is to work in an impartial way and with a democratic basis. There are also several advertising-funded TV and radio channels in Sweden that scrutinise those in power, for example TV4.
The market consists of private companies and consumers that together influence the country's economy and labour market. Economic development in the enterprise sector has an impact on the state's tax income.
Civil society is the name given to a part of society in which people help each other without the direct involvement of the state. The primary motivation behind civil society is not money, as is the case with, for example, a company. Civil society is also sometimes called the non-governmental sector, the voluntary sector or the third sector. Examples of actors in civil society are charities, sports clubs and political parties that are neither directly funded by the state nor exist simply to earn money.
Popular movements in Sweden such as the labour movement or the temperance movement are examples of how civil society can be a powerful force in society, with neither the state nor the market being the driving forces. Civil society is an important part of a democratic society in which there are many ways you, as a member of the public, can be involved in influencing society.
Democratic rights such as freedom of expression, of the press and of association are also, albeit indirectly, a call to citizens to get involved in politics. People can participate in politics in various ways, for instance by becoming involved in a political party, an organisation or an association in order to pursue various issues. People can contact various media in order to inform them about issues they find important. If you contact a journalist you have the right to have your anonymity protected. People can also contact politicians in the municipality where they live and offer suggestions or points of view about decisions that have been made.
the National Council of Teachers of Mathematics
In this lesson for grades 6-12, learners explore the relationship between dimension and volume. Using colored paper, students create two rectangular prisms and two cylinders to determine which holds more popcorn. They then justify their conclusions by analyzing the formulas and identifying dimensions with the largest impact on volume.
Editor's Note: This activity presents an excellent opportunity for students to gain real insight into why increasing the radius of a cylinder has more impact on volume than increasing its height. It will also promote understanding of why the formulas for calculating volume work.
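A quick worked example (the numbers are ours, not part of the lesson) makes the point: for a cylinder, V = π × r² × h. Starting from r = 3 and h = 10 (V ≈ 283 cubic units), doubling the height gives V ≈ 565 (twice as much), while doubling the radius gives V ≈ 1131 (four times as much), because the radius enters the formula squared.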
This resource is aligned to NCTM standards and includes lesson objectives, teaching tips, and printable student worksheets with answer keys provided. It is part of a larger collection of lessons, labs, and activities developed by the National Council of Teachers of Mathematics (NCTM).
Metadata instance created February 2, 2011 by Caroline Hall
Updated August 17, 2016 by Lyle Barbato
Last Update when Cataloged: July 15, 2008
AAAS Benchmark Alignments (2008 Version)
1. The Nature of Science
1A. The Scientific Worldview
3-5: 1A/E2. Science is a process of trying to figure out how the world works by making careful observations and trying to make sense of those observations.
6-8: 1A/M3. Some scientific knowledge is very old and yet is still applicable today.
1B. Scientific Inquiry
6-8: 1B/M1b. Scientific investigations usually involve the collection of relevant data, the use of logical reasoning, and the application of imagination in devising hypotheses and explanations to make sense of the collected data.
9. The Mathematical World
6-8: 9C/M7. For regularly shaped objects, relationships exist between the linear dimensions, surface area, and volume.
6-8: 9C/M10. Geometric relationships can be described using symbolic equations.
9-12: 9C/H3a. Geometric shapes and relationships can be described in terms of symbols and numbers—and vice versa.
12. Habits of Mind
12B. Computation and Estimation
6-8: 12B/M3. Calculate the circumferences and areas of rectangles, triangles, and circles, and the volumes of rectangular solids.
6-8: 12B/M7b. Convert quantities expressed in one unit of measurement into another unit of measurement when necessary to solve a real-world problem.
Common Core State Standards for Mathematics Alignments
Standards for Mathematical Practice (K-12)
MP.2 Reason abstractly and quantitatively.
Measurement and Data (K-5)
Geometric measurement: understand concepts of volume and relate volume to multiplication and to addition. (5)
5.MD.3.b A solid figure which can be packed without gaps or overlaps using n unit cubes is said to have a volume of n cubic units.
5.MD.5.b Apply the formulas V = l × w × h and V = b × h for rectangular prisms to find volumes of right rectangular prisms with whole-number edge lengths in the context of solving real world and mathematical problems.
Graph points on the coordinate plane to solve real-world and mathematical problems. (5)
5.G.2 Represent real world and mathematical problems by graphing points in the first quadrant of the coordinate plane, and interpret coordinate values of points in the context of the situation.
Solve real-life and mathematical problems involving angle measure, area, surface area, and volume. (7)
7.G.6 Solve real-world and mathematical problems involving area, volume and surface area of two- and three-dimensional objects composed of triangles, quadrilaterals, polygons, cubes, and right prisms.
Solve real-world and mathematical problems involving volume of cylinders, cones, and spheres. (8)
8.G.9 Know the formulas for the volumes of cones, cylinders, and spheres and use them to solve real-world and mathematical problems.
This resource is part of a Physics Front Topical Unit.
Topic: Measurement and the Language of Physics Unit Title: Applying Measurement in Physics
One of the best lessons we've found to help students get the connection between dimension and volume. They conduct an experiment to create two rectangular prisms and two cylinders, then determine which design holds the most popcorn. They will test ideas, graph outcomes, and present findings. Includes printable worksheets with answer keys.
National Council of Teachers of Mathematics. Illuminations: Popcorn, Anyone?. Reston: National Council of Teachers of Mathematics, July 15, 2008. http://illuminations.nctm.org/Lesson.aspx?id=2927 (accessed 27 March 2017).
In astronomy, color–color diagrams are a means of comparing the apparent magnitudes of stars at different wavelengths. Astronomers typically observe at narrow bands around certain wavelengths, and objects observed will have different brightnesses in each band. The difference in brightness between two bands is referred to as color. On color–color diagrams, the color defined by two wavelength bands is plotted on the horizontal axis, and then the color defined by another brightness difference (though usually there is one band involved in determining both colors) will be plotted on the vertical axis.
Although stars are not perfect blackbodies, to first order the spectra of light emitted by stars conform closely to a black-body radiation curve, also sometimes referred to as a thermal radiation curve. The overall shape of a black-body curve is uniquely determined by its temperature, and the wavelength of peak intensity is inversely proportional to temperature, a relation known as Wien's Displacement Law. Thus, observation of a stellar spectrum allows determination of its effective temperature. However, obtaining complete spectra for stars through spectrometry is much more involved than simple photometry in a few bands. By comparing a star's magnitudes in multiple color indices, the effective temperature of the star can still be determined, as the magnitude differences between bands will be unique for that temperature. As such, color-color diagrams can be used as a means of representing the stellar population, much like a Hertzsprung–Russell diagram, and stars of different spectral classes will inhabit different parts of the diagram. This feature leads to applications within various wavelength bands.
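As a concrete illustration (the numbers are ours), Wien's law in its standard form reads λmax = b / T, with b ≈ 2.898 × 10^-3 m·K. A Sun-like star with T ≈ 5,800 K peaks at λmax ≈ 2.898 × 10^-3 / 5,800 ≈ 5.0 × 10^-7 m, about 500 nm, in the visible; a cooler 3,000 K star peaks near 970 nm, in the infrared, which is why it looks redder in any pair of bands.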
In the stellar locus, stars tend to align in a more or less straight feature. If stars were perfect black bodies, the stellar locus would indeed be a perfectly straight line. The deviations from the straight line are due to absorption and emission lines in the stellar spectra. These deviations can be more or less evident depending on the filters used: narrow filters with central wavelengths located in regions without lines will produce a response close to that of a black body, and even filters centered on lines can give a reasonably blackbody-like behavior if they are broad enough.
Therefore, in most cases the straight feature of the stellar locus can be described by Ballesteros' formula, deduced for pure blackbodies:

C − D = (A − B) × (νc − νd) / (νa − νb) + k

where A, B, C and D are the magnitudes of the stars measured through filters with central frequencies νa, νb, νc and νd respectively, and k is a constant depending on the central wavelengths and widths of the filters. Note that the slope of the straight line depends only on the effective wavelengths of the filters, not on their widths.
Although this formula cannot be directly used to calibrate data, if one has data well calibrated for two given filters, it can be used to calibrate data in other filters. It can also be used to measure the effective wavelength midpoint of an unknown filter, by using two well-known filters. This can be useful for recovering information on the filters used in old data, when logs have not been conserved and filter information has been lost.
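As a sketch of that last use case, and assuming the linear locus relation given above, one could fit the locus slope from well-calibrated colors and solve for the unknown filter's effective frequency. The star colors and filter frequencies below are illustrative values, not real survey data:

#include <iostream>
#include <numeric>
#include <vector>

// Least-squares slope of y against x.
double fitSlope(const std::vector<double>& x, const std::vector<double>& y) {
    double mx = std::accumulate(x.begin(), x.end(), 0.0) / x.size();
    double my = std::accumulate(y.begin(), y.end(), 0.0) / y.size();
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < x.size(); ++i) {
        num += (x[i] - mx) * (y[i] - my);
        den += (x[i] - mx) * (x[i] - mx);
    }
    return num / den; // assumes the x values are not all identical
}

int main() {
    // (A - B) colors through two well-known filters, and (C - D) colors
    // where D is the unknown filter (hypothetical measurements).
    std::vector<double> colorAB = {0.10, 0.35, 0.62, 0.90, 1.20};
    std::vector<double> colorCD = {0.08, 0.28, 0.50, 0.72, 0.96};

    // Effective central frequencies (Hz) of filters a, b and c.
    double nuA = 6.8e14, nuB = 5.5e14, nuC = 6.0e14;

    // From the locus relation, slope s = (nu_c - nu_d) / (nu_a - nu_b),
    // so the unknown frequency is nu_d = nu_c - s * (nu_a - nu_b).
    double s = fitSlope(colorAB, colorCD);
    double nuD = nuC - s * (nuA - nuB);

    std::cout << "Fitted locus slope: " << s << "\n";
    std::cout << "Estimated effective frequency of filter d: " << nuD << " Hz\n";
    return 0;
}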
The color-color diagram of stars can be used to directly calibrate or to test colors and magnitudes in optical and infrared imaging data. Such methods take advantage of the fundamental distribution of stellar colors in our galaxy across the vast majority of the sky, and the fact that observed stellar colors (unlike apparent magnitudes) are independent of the distance to the stars. Stellar locus regression (SLR) was a method developed to eliminate the need for standard star observations in photometric calibrations, except highly infrequently (once a year or less) to measure color terms. SLR has been used in a number of research initiatives. The NEWFIRM survey of the NOAO Deep Wide-Field Survey region used it to arrive at more accurate colors than would have otherwise been attainable by traditional calibration methods, and South Pole Telescope used SLR in the measurement of redshifts of galaxy clusters. The blue-tip method is closely related to SLR, but was used mainly to correct Galactic extinction predictions from IRAS data. Other surveys have used the stellar color-color diagram primarily as a calibration diagnostic tool, including The Oxford-Dartmouth Thirty Degree Survey and Sloan Digital Sky Survey (SDSS).
Analyzing data from large observational surveys, such as the SDSS or 2 Micron All Sky Survey (2MASS), can be challenging due to the huge volume of data produced. For surveys such as these, color-color diagrams have been used to find outliers from the main sequence stellar population. Once these outliers are identified, they can then be studied in more detail. This method has been used to identify ultracool subdwarfs. Unresolved binary stars, which appear photometrically to be points, have been identified by studying color-color outliers in cases where one member is off the main sequence. The stages of the evolution of stars along the asymptotic giant branch from carbon star to planetary nebula appear on distinct regions of color–color diagrams. Quasars also appear as color-color outliers.
Color–color diagrams are often used in infrared astronomy to study star forming regions. Stars form in clouds of dust. As the star continues to contract, a circumstellar disk of dust is formed, and this dust is heated by the star inside. The dust itself then begins to radiate as a blackbody, though one much cooler than the star. As a result, an excess of infrared radiation is observed for the star. Even without circumstellar dust, regions undergoing star formation exhibit high infrared luminosities compared to stars on the main sequence. Each of these effects is distinct from the reddening of starlight which occurs as a result of scattering off of dust in the interstellar medium.
Color–color diagrams allow for these effects to be isolated. As the color–color relationships of main sequence stars are well known, a theoretical main sequence can be plotted for reference, as is done with the solid black line in the example to the right. Interstellar dust scattering is also well understood, allowing bands to be drawn on a color–color diagram defining the region in which stars reddened by interstellar dust are expected to be observed, indicated on the color–color diagram by dashed lines. The typical axes for infrared color–color diagrams have (H–K) on the horizontal axis and (J–H) on the vertical axis (see infrared astronomy for information on band color designations). On a diagram with these axes, stars which fall to the right of the main sequence and the reddening bands drawn are significantly brighter in the K band than main sequence stars, including main sequence stars which have experienced reddening due to interstellar dust. Of the J, H, and K bands, K is the longest wavelength, so objects which are anomalously bright in the K band are said to exhibit infrared excess. These objects are likely protostellar in nature, with the excess radiation at long wavelengths caused by suppression by the reflection nebula in which the protostars are embedded. Color–color diagrams can be used then as a means of studying stellar formation, as the state of a star in its formation can be roughly determined by looking at its position on the diagram.
- Figure modeled after E. Böhm-Vitense (1989). "Figure 4.9". Introduction to Stellar Astrophysics: Basic stellar observations and data. Cambridge University Press. p. 26. ISBN 0-521-34869-2.
- Ballesteros, F.J. (2012). "New insights into black bodies". EPL 97 (2012) 34008. arXiv:1201.1809.
- F. W. High; et al. (2009). "Stellar Locus Regression: Accurate Color Calibration and the Real-Time Determination of Galaxy Cluster Photometric Redshifts". The Astronomical Journal. 138 (1): 110–129. arXiv:0903.5302. Bibcode:2009AJ....138..110H. doi:10.1088/0004-6256/138/1/110.
- F. W. High; et al. (2010). "Optical Redshift and Richness Estimates for Galaxy Clusters Selected with the Sunyaev-Zel'dovich Effect from 2008 South Pole Telescope Observations". The Astrophysical Journal. 723 (2): 1736–1747. arXiv:1003.0005. Bibcode:2010ApJ...723.1736H. doi:10.1088/0004-637X/723/2/1736.
- E. Schlafly; et al. (2010). "The Blue Tip of the Stellar Locus: Measuring Reddening with the SDSS". The Astrophysical Journal. 725 (1): 1175. arXiv:1009.4933. Bibcode:2010ApJ...725.1175S. doi:10.1088/0004-637X/725/1/1175.
- E. MacDonald; et al. (2004). "The Oxford-Dartmouth Thirty Degree Survey – I. Observations and calibration of a wide-field multiband survey". Monthly Notices of the Royal Astronomical Society. 352 (4): 1255–1272. arXiv:astro-ph/0405208. Bibcode:2004MNRAS.352.1255M. doi:10.1111/j.1365-2966.2004.08014.x.
- Z. Ivezic; et al. (2007). "Sloan Digital Sky Survey Standard Star Catalog for Stripe 82: The Dawn of Industrial 1% Optical Photometry". The Astronomical Journal. 134 (3): 973–998. arXiv:astro-ph/0703157. Bibcode:2007AJ....134..973I. doi:10.1086/519976.
- Burgasser, A. J.; Cruz, K.L.; Kirkpatrick, J.D. (2007). "Optical Spectroscopy of 2MASS Color-selected Ultracool Subdwarfs". Astrophysical Journal. 657 (1): 494–510. arXiv:astro-ph/0610096. Bibcode:2007ApJ...657..494B. doi:10.1086/510148.
- Gizis, J.E.; et al. (2000). "New Neighbors from 2MASS: Activity and Kinematics at the Bottom of the Main Sequence". Astronomical Journal. 120 (2): 1085–1099. arXiv:astro-ph/0004361. Bibcode:2000AJ....120.1085G. doi:10.1086/301456.
- Covey, K.R.; et al. (2007). "Stellar SEDs from 0.3 to 2.5 micron: Tracing the Stellar Locus and Searching for Color Outliers in the SDSS and 2MASS". Astronomical Journal. 134 (6): 2398–2417. arXiv:0707.4473. Bibcode:2007AJ....134.2398C. doi:10.1086/522052.
- Ortiz, R.; et al. (2005). "Evolution from AGB to planetary nebula in the MSX survey". Astronomy and Astrophysics. 431 (2): 565–574. arXiv:astro-ph/0411769. Bibcode:2005A&A...431..565O. doi:10.1051/0004-6361:20040401.
- C. Struck-Marcell; B.M. Tinsley (1978). "Star formation rates and infrared radiation". Astrophysical Journal. 221: 562–566. Bibcode:1978ApJ...221..562S. doi:10.1086/156057.
- Lada, C.J.; et al. (2000). "Infrared L-Band Observations of the Trapezium Cluster: A Census of Circumstellar Disks and Candidate Protostars". The Astronomical Journal. 120 (6): 3162–3176. arXiv:astro-ph/0008280. Bibcode:2000AJ....120.3162L. doi:10.1086/316848.
- Charles Lada; Fred Adams (1992). "Interpreting infrared color-color diagrams – Circumstellar disks around low- and intermediate-mass young stellar objects". Astrophysical Journal. 393: 278–288. Bibcode:1992ApJ...393..278L. doi:10.1086/171505.
- What is the basic law of supply?
- What is the function of supply?
- What is effective demand for tourism?
- What is effective supply in economics?
- What is effective demand explain with diagram?
- What are the market demands?
- What is the concept of supply?
- What are the types of supply?
- What are the two components of effective demand?
- How is effective demand determined?
- What are the two components of money supply?
- What are the types of demands?
- What is effective and ineffective demand?
- What is supply and example?
- What are the features of supply?
What is the basic law of supply?
The law of supply is the microeconomic law that states that, all other factors being equal, as the price of a good or service increases, the quantity of goods or services that suppliers offer will increase, and vice versa.
What is the function of supply?
A supply function is a mathematical expression of the relationship between the quantity supplied of a product or service, its price and other associated factors such as input costs, prices of related goods, etc. … The same is the case with supply and input prices, i.e. at higher input prices, supply is lower.
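For illustration (the numbers are ours), a simple linear supply function might be Qs = −10 + 2P: at a price of P = 10, the quantity supplied is Qs = −10 + 2 × 10 = 10 units, and at P = 20 it rises to 30 units. A rise in input costs would shift the whole function, lowering Qs at every price.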
What is effective demand for tourism?
Actual demand, also referred to as effective demand, comes from tourists who are involved in the actual process of tourism. The second type of demand is so-called suppressed demand, created by two categories of people who are generally unable to travel due to circumstances beyond their control.
What is effective supply in economics?
The amount of labor they choose to supply, contingent on the constraint on the amount of goods they can buy, is the effective supply of labor. Another example involves spillovers from credit markets to the goods market. … Firms can also exhibit effective demands or supplies that differ from notional demands or supplies.
What is effective demand explain with diagram?
Effective demand refers to the willingness and ability of consumers to purchase goods at different prices. … The importance of Keynes’ view is that effective demand may be insufficient to achieve full employment, since unemployed workers lack the income to buy the goods that would otherwise be produced.
What are the market demands?
Market demand is the total quantity demanded across all consumers in a market for a given good. Aggregate demand is the total demand for all goods and services in an economy.
What is the concept of supply?
Supply is a fundamental economic concept that describes the total amount of a specific good or service that is available to consumers. Supply can relate to the amount available at a specific price or the amount available across a range of prices if displayed on a graph.
What are the types of supply?
There are five types of supply:
- Market Supply: Market supply is also called very short period supply. …
- Short-term Supply: …
- Long-term Supply: …
- Joint Supply: …
- Composite Supply:
What are the two components of effective demand?
In other words, the sum of consumption expenditure and investment expenditure (C + I) constitutes effective demand in a two-sector economy. In the fuller expression, G stands for government expenditure; here we ignore government expenditure as a component of effective demand.
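For illustration (the numbers are ours): if consumption expenditure C = 400 billion and investment expenditure I = 100 billion, effective demand in this two-sector economy is C + I = 500 billion.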
How is effective demand determined?
The principle of ‘effective demand’ is basic to Keynes’ analysis of income, output and employment. … Stated briefly, the principle of effective demand tells us that in the short period, an economy’s aggregate income and employment are determined by the level of aggregate demand at which it is matched by aggregate supply.
What are the two components of money supply?
Components of money supply:
- Currency such as notes and coins with the people.
- Demand deposits with the banks, such as savings and current accounts.
- Time deposits with the banks, such as fixed deposits and recurring deposits.
What are the types of demands?
7 types of demand are:
- Price demand.
- Income demand.
- Cross demand.
- Individual demand and market demand.
- Joint demand.
- Composite demand.
- Direct and derived demand.
What is effective and ineffective demand?
Effective demand is the desire or want backed up by the ability and willingness to pay for a certain quantity of goods or services at a particular price and time, while ineffective demand is merely a desire or want to own goods or services that is not backed up by the means to pay.
What is supply and example?
Examples of the supply and demand concept: Supply refers to the amount of goods that are available. … When the supply of a product goes up, the price of the product goes down and demand for the product can rise because it costs less. At some point, too much demand for the product will cause the supply to diminish.
What are the features of supply?
Supply: 4 Main Features of Supply | Micro Economics
- Supply is a desired quantity: …
- Supply of a commodity does not comprise the entire stock of the commodity: …
- Supply is always expressed with reference to price: …
- Supply is always with respect to a period of time: …
|
Inflation is the general increase in prices for goods and services, meaning it takes more money to buy a particular item. A country's inflation rate is usually reported as an annual percentage.
Types of inflation
Here are three of the different types of inflation:
Demand-pull inflation is a type of inflation that occurs when people increase their purchases of goods and services. This increased demand can lead to shortages and an increase in prices. Usually, demand-pull inflation occurs during periods of rapid economic growth or in an economy where the money supply is expanding rapidly.
When prices for goods and services begin to rise due to higher costs, this is called cost-push inflation. Rising wages or prices for raw materials can cause cost-push inflation. Cost-push inflation can sometimes lead to a wage-price spiral.
Built-in inflation is the amount that prices go up if the economy is operating at its potential. It is also called "expectations-augmented inflation" because prices adjust to reflect what people expect they will be in the future. If an economy grows faster than potential, then built-in inflation will be positive.
Causes of Inflation
Here are some of the factors that can cause inflation:
The Money Supply
Inflation occurs when an economy's money supply grows faster than its output of goods and services. For example, a bad harvest due to droughts or floods will decrease the supply of goods and cause prices to rise. If a central bank expands (or "injects") more money into the economy at the same time, the rise in prices is reinforced.
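A back-of-envelope sketch of this idea, assuming a roughly stable velocity of money and using made-up growth figures:

```python
# Quantity-theory sketch (MV = PQ): with velocity roughly stable,
# inflation is approximately money growth minus real output growth.
# The growth figures below are illustrative, not real data.
money_growth = 0.07    # money supply grows 7% per year
output_growth = 0.02   # real output grows 2% per year

inflation_estimate = money_growth - output_growth
print(f"approximate inflation: {inflation_estimate:.1%}")  # 5.0%
```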
When global demand for a country's goods increases, this can also cause inflation. For example, if foreign buyers demand more of a country's exports, the supply available at home shrinks and domestic prices are bid up.
When exports decrease, it can also lead to an increase in prices. For example, suppose a country's exports fall in the face of a domestic recession or a drop in international commodity prices. In that case, this will reduce the supply of household goods and cause prices to rise.
Government taxes often cause inflation. If there are high tax rates and little competition among companies, then this will result in high inflation.
When businesses replace a less profitable product with a more profitable one, it could cause inflation. For example, customers may choose to buy the more expensive product. To compensate for this purchase, the less profitable one must change prices to match the price of the more profitable product. This can cause inflation.
Disruptions in Production
When there are disruptions in production, this will also cause prices to rise. For example, strikes at factories or supply interruptions due to natural disasters can increase the prices of goods and services in an economy.
Effects of Inflation
Inflation affects many areas. These include:
Low Purchasing Power
When prices rise, the purchasing power of the consumer will go down. For example, if the consumer's income remains the same and prices increase, they must spend more money to purchase an item than before. Rising prices mean that each currency unit buys fewer goods and services.
High Debt Levels
When inflation is high, people will see their debt levels increase. For example, when a person takes out a loan with an adjustable interest rate, the debt will increase if there is inflation. This can cause a problem because the person will have to pay more interest to repay the loan.
Rising Interest Rates
When inflation is high, interest rates tend to rise. For example, if inflation rises and banks are forced to put up their interest rates, banks will charge more for loans.
Because the currency's purchasing power falls, this can lead to shortages. In other words, people will be unable to buy certain goods and services because they do not have enough money in their pockets.
When inflation is high, people often stop saving money because they know that prices will rise and their savings will not be worth much. This can mean that people cannot invest in businesses, so the economy will not grow as quickly. When there are fewer investments, then this can also mean less job creation.
Inflation creates an unstable business environment. Suppose a country experiences a high inflation rate but expects lower inflation levels in the future. In that case, the country may go into recession or depression because businesses will be unwilling to invest under these conditions.
There are several ways of preventing inflation from occurring. Here are some things that can be done to bring down inflation.
Controlling Monetary Supply
Inflation occurs when the money supply increases faster than the economy produces goods and services. If a central bank controls the money supply, it can hold back price growth and bring inflation rates down.
For example, if the money supply increases faster than the economy is expanding, this will cause inflation. However, if the money supply decreases and interest rates rise, this will slow down economic growth and can cause deflation.
If a country's government attempts to control its citizens' spending, this will also be an effective way to reduce inflation. This can occur by increasing taxes or cutting spending.
For example, if a higher tax rate is imposed on wealthier people, this will effectively control the amount of money that these people can keep in their pockets and cause them to spend less. Furthermore, if income tax rates are raised to cut spending, then this can prevent the economy from growing and cause deflation.
Pulling Back on Government Spending
If the government spends more money on specific projects while inflation is already rising, this can add to the pressure on prices. However, if the government reduces spending (such as by cutting back on military spending), this could alleviate some pressure for prices to rise.
Printing Less Cash
When the government decides to print more money, this generally leads to a higher inflation rate. For example, if a government needs more money to finance budget deficits, it may print more. This can lead to inflation.
There are several ways of measuring inflation. These include:
Consumer Price Index
When the government measures the inflation rate, it often uses the Consumer Price Index (CPI). This is calculated by comparing current prices to prices from a base year.
For example, if a basket of goods costs $100 in January 2016 and $110 in January 2017, then there has been 10% inflation since 2016. The Consumer Price Index will be able to measure these price increases and show how much inflation has occurred over time.
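The index arithmetic in this example is easy to reproduce; a minimal sketch in Python:

```python
# Reproduces the CPI arithmetic above: inflation is the percentage change
# in the cost of a fixed basket relative to the base period.
base_cost = 100.0      # basket cost in January 2016
current_cost = 110.0   # basket cost in January 2017

inflation = (current_cost - base_cost) / base_cost
print(f"CPI inflation: {inflation:.0%}")  # 10%
```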
Producer Price Index
The Producer Price Index (PPI) measures inflation in the manufacturing sector. It shows how much prices have increased since the base year.
For example, if the price of a car has fallen from $6,000 in January 2015 to $5,500 in January 2016, then there has been roughly 8.3% deflation since 2015 (a $500 fall on a $6,000 base). The Producer Price Index will be able to measure this decrease in prices and show how much deflation has occurred over time.
The GDP Deflator measures inflation in the economy as a whole. It is obtained by comparing the prices of all goods and services used across all sectors of the economy.
For example, if a basket of goods costs $100 in January 2016 and $110 in January 2017, then there has been 10% inflation since 2016. The GDP Deflator will be able to measure these price increases and show how much inflation has occurred over time.
Frequently Asked Questions
For how long will inflation persist?
Inflation is sometimes a 'one-off' event: in the short term (a few years), inflation rises and then falls back to normal levels.
However, it can also become a long-term trend. In that case, inflation will stay as high as, or higher than, it was before.
Why do central banks target inflation?
Inflation is the rate at which the general price level rises. If inflation begins to increase, people will start buying less because prices are rising and real incomes fall. In other words, when prices rise, people will not be able to buy as much with their money.
This means that a state or nation can experience a drop in economic growth and employment if inflation gets too far out of control. This is why most central banks target an inflation rate of about 2% per year, so they can monitor how far price growth has risen above 2% and decide if they need to act on this figure.
Inflation can be regarded as a massive problem for many countries because it can cause the economy to shrink and can cause unemployment. However, many countries have experienced inflation before, and some have even made it part of their monetary policy.
Why is inflation bad?
When the cost of goods and services keeps rising, people will not be able to buy as much with their money, and this will cause a drop in economic growth and employment. If the cost of goods and services falls, people will be able to buy more with their money.
In conclusion, inflation can be an alarming problem for countries because it creates a lot of economic instability and uncertainty. However, many countries have shown that they can control inflation through their monetary policy and changing their financial actions.
|
Government and the Market
As we have seen in Chapter 3, competitive markets work automatically to set equilibrium prices and quantities. In addition, markets adjust automatically to changing conditions. If consumer demand for a product wanes, the demand curve shifts to the left, price soon falls, causing sellers to reduce their quantity supplied and to divert their resources to the production of other more profitable products. If consumer demand for a product increases, the demand curve shifts to the right, price increases, giving sellers an incentive to devote more resources to the production of the good.
Despite the somewhat self-regulating qualities of a market, the equilibrium in a market is sometimes influenced by the intervention of the government. Through intervention, the government can change the price consumers will have to pay or the price sellers will receive. The quantity that is ultimately purchased will also be influenced by government intervention.
Four types of government intervention are considered in this chapter. The first two are price controls: price ceilings and price floors. The latter two examples of government intervention are less direct: they consist of taxes and subsidies, which may be placed on goods; however, they, too, affect the equilibrium price and quantity of the good in question.
A price ceiling is simply a maximum price that may be charged for a good. It has the force of law behind it in that sellers who charge more than the price ceiling can be arrested and be subjected to fines or even prison sentences. Therefore, sellers, in order to remain within the confines of the law, must charge a price equal to or less than the price ceiling.
What is the government's purpose in placing a price ceiling on a good? Clearly, a price ceiling is meant to help the buyer by reducing the price that must be paid for a good. Why would the government want to reduce the price charged to buyers of a particular good? There is no obvious economic answer to this question. It is mostly a political decision to enact a price ceiling on a good. The good may be deemed one that is particularly important and that everyone needs. For example, it may be felt that a price ceiling on gasoline is necessary so that lower-income families will still be able to purchase the good. Or buyers may have the political power to get legislation passed that benefits them. Whatever the reason, price ceilings have very predictable effects.
Consider for a moment Figure 5.1 which represents the market for gasoline. As can be seen, the equilibrium price of gasoline is $1.50 per gallon and the equilibrium quantity is 500 million gallons per week. If the government were to place a price ceiling of $2.00 per gallon on gasoline (as shown by line Pc), what effect would this have on the market? Since gasoline must be sold at or below the price ceiling of $2.00, there is no effect. The equilibrium price and quantity will remain at their present levels. Therefore, a price ceiling that is above the current equilibrium price will have no effect on the market.
Figure 5.2 reproduces 5.1 with the same equilibrium price and quantity. Suppose now the government places a price ceiling of $1.25 per gallon on gasoline (as shown by line Pc). At that price, the quantity supplied will fall to 400 million gallons of gasoline per week while the quantity demanded increases to 600 million gallons per week. There is a shortage of gasoline at the price ceiling equal to 200 million gallons per week. Normally, when there is a shortage of a good at the current market price, that price will rise and eliminate the shortage. But because of the government's price ceiling, that will not occur in this case. There is a permanent shortage. So, in general, a price ceiling that is below the equilibrium price will cause a shortage of the good.
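To see these numbers concretely, one can fit straight-line demand and supply curves through the points quoted for Figures 5.1 and 5.2; the linearity is our assumption, since the text gives only the points:

```python
# Linear curves fitted through the quoted points (an assumption).
# Quantities in millions of gallons per week, prices in dollars per gallon.
def q_demanded(p):
    return 1100 - 400 * p    # passes through (1.50, 500) and (1.25, 600)

def q_supplied(p):
    return -100 + 400 * p    # passes through (1.50, 500) and (1.25, 400)

ceiling = 1.25
shortage = q_demanded(ceiling) - q_supplied(ceiling)
print(q_demanded(ceiling), q_supplied(ceiling), shortage)
# 600.0 400.0 200.0 -> a permanent shortage of 200 million gallons per week
```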
Now that there is a shortage, what happens next? How will the limited quantity supplied (400 million in this case) be rationed among the buyers who want to purchase 600 million gallons? That is, how will it be decided who gets the gasoline which is available? There are several possibilities. The gasoline may be sold on a first-come, first-serve basis. Whoever gets to the station first gets the gasoline. Those who prefer to sleep late may find there is no gasoline when they get to the station. This type of approach may lead to long lines and excessive waiting by buyers. If you are willing to wait in line for, say, an hour or two, you will get gasoline at $1.25 a gallon. But the true cost of buying the gasoline is much higher than that since you incur an opportunity cost from waiting in line. The value of your time spent waiting in line has to be added to the price of the gasoline to get the true cost of the gas to you.
Stations may resort to rationing by selling to their best customers, however they determine who their best customers are. If the station owner doesn't know you, you may not get gas. Also notice that this allows station owners to discriminate against certain groups if they so desire. In a competitive market, discrimination by a seller in the form of not selling to particular groups has a price: lost sales. But in a shortage situation, sellers can choose not to sell to particular groups and still be able to sell all of their goods. This is because at the price ceiling, quantity demanded exceeds quantity supplied. So if sellers decide to discriminate against blue-haired economists with green glasses, they can do so. (I'm not one of them, however, so I'll get gasoline.)
A final possibility for rationing the scarce gasoline that is available is for the government to ration it. That's right, the same group that caused the shortage in the first place by implementing a price ceiling is now going to attempt to fix the problem that it caused. The government would print up coupons (equal to the quantity supplied at the price ceiling, which is 400 million gallons in our example) and distribute them among the buyers. The big question is, of course, how does the government decide which buyers get gasoline and which do not? How about each licensed driver gets the same amount (say 5 gallons each per week)? But some people need more gasoline than others, such as the long-distance commuter, the traveling salesperson, etc. They clearly should get more while the retiree engaged mostly in pleasure driving should get less. That seems reasonable, perhaps, but notice the difficulty of actually determining priorities of need. Some type of government agency will be necessary to handle all of this: determining need, distributing coupons, enforcing the rules, etc. Note that scarce resources will be needed to implement the government rationing.
Another possibility exists: people will basically ignore the government and buy and sell gasoline at the equilibrium price ($1.50). This is what is meant by a black market-a good is bought and sold for higher than the legal ceiling price. While we don't like to encourage people to break the law, it is difficult to see why the government should prevent transactions in which people willingly engage. As long as the market is competitive and a buyer is willing to pay more than the legal price ceiling in order to have gasoline, should the government object?
Notice that the rationing and signaling functions of price are hampered by the use of a price ceiling. Because there is a shortage of the good, it needs to be rationed in some other way besides price and we have mentioned some possibilities. Also note that the good is not necessarily rationed to those buyers who have the highest-valued uses for the good. Someone who is willing to pay only $1.25 per gallon may get gasoline while someone else who values the good at $1.50 per gallon or more may not get any gasoline. This is inefficient. Relatedly, note that the efficient level of gasoline is not produced. At the price ceiling, 400 million gallons of gasoline are produced. But at that quantity, someone is willing to pay $1.75 for a gallon of gasoline while the marginal cost of the last gallon of gasoline is $1.25. Therefore, the marginal benefit exceeds the marginal cost at 400 million gallons per week. More gasoline should be produced. Also notice the signal that is being sent to sellers of gasoline. Since the price is being held down by the ceiling, we are, in effect, telling sellers to produce less gasoline even though there is a shortage of it.
You probably realize that I am not in favor of price ceilings and you are right. There is very little to recommend them. Perhaps in time of war or some other national emergency, a case can be made for them. Otherwise, it's probably best to forget about them.
Government may also intervene in markets to help the sellers by setting a price floor for a good. The idea of a price floor is that there is a minimum price for which a good will sell. The government will take steps to insure that the good does not sell for less than the price floor.
Figure 5.3 is a depiction of the market for corn. The current equilibrium price is $2.50 per bushel while the equilibrium quantity is 400 million bushels per month. Suppose the government places a price floor of $2.00 per bushel on corn (as shown by line Pf). This means that corn is not to be sold for less than this amount. What effect will the $2.00 price floor have on the market in Figure 5.3? Clearly none, since the current equilibrium price of corn is $2.50, which is above the price floor. So a price floor that is below the current equilibrium price has no effect on the market.
Figure 5.4 effectively reproduces the market for corn as depicted in Figure 5.3, where the equilibrium price of corn is $2.50 per bushel and the equilibrium quantity is 400 million bushels per month. Let the government set a price floor of $3.00 per bushel for corn, which, of course, is above the current equilibrium price of corn (this is shown by line Pf). At that price, the quantity demanded of corn is 350 million bushels per month while the quantity supplied is 450 million bushels per month. There clearly is an excess supply of corn at the price floor. Sellers wish to sell an additional 100 million bushels of corn per month over what buyers wish to buy. Suppose the government did nothing else at this point but pass a law saying it was illegal to sell or buy corn for less than $3.00 per bushel. Would the sellers be better off? As a group they clearly would be. Their total revenue from selling corn has increased from $1000 million per month ($2.50 times 400 million bushels) to $1050 million ($3.00 times 350 million bushels). This is because the demand for corn is inelastic so a price increase leads to more revenue.
While farmers as a group are better off because they have more revenue, some farmers would be worse off because they have corn that they are unable to sell. Instead of receiving $2.50 per bushel as before, some farmers now get nothing. They would not be very happy. Politically, it is evident that the government would not be able to enact the price floor and then do nothing else. So what will be done in practice is the government purchases the excess supply of corn at the price floor (100 million bushels per month in our example for a total of $300 million). This allows farmers to sell their entire quantity supplied at $3.00 per bushel, either in the market or to the government.
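The revenue and government-purchase arithmetic in the last two paragraphs can be checked directly; a minimal sketch:

```python
# Arithmetic behind the corn price floor. Quantities in millions of
# bushels per month, prices in dollars per bushel.
eq_price, eq_qty = 2.50, 400    # original equilibrium
floor_price = 3.00
qty_demanded = 350              # at the floor
qty_supplied = 450              # at the floor

surplus = qty_supplied - qty_demanded          # 100 million bushels
revenue_before = eq_price * eq_qty             # $1000 million
revenue_market = floor_price * qty_demanded    # $1050 million
gov_purchase = floor_price * surplus           # $300 million paid by taxpayers
revenue_total = revenue_market + gov_purchase  # $1350 million if the
                                               # government buys the surplus
print(surplus, revenue_before, revenue_market, gov_purchase, revenue_total)
```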
The government is now in the corn business. What could it do with all of the corn that it buys up? There are several possibilities. One is that it could store the corn. The corn could be saved for future use, for example, in a drought. In such a time when market supplies would be low, the government could release corn from its own stocks to add to the market supply. This sounds prudent, though in practice, the government has only rarely been required to draw upon its surplus stocks. A second possibility is for the government to sell the surplus corn to other countries. We may not need it, but perhaps other countries will. That is quite possible, but at what price is the government going to be able to export the corn? Clearly not $3.00, the price it paid for the corn. Most likely, the government will sell the corn to other countries at $2.50 per bushel, the equilibrium price. So now the government is buying up corn at $3.00 per bushel and selling it to other countries for $2.50 per bushel. Who makes up the loss? Most likely, taxpayers will be required to pick up the tab. A third possibility is for the government to give the surplus away. This is a common way to dispose of surplus stocks. Various government giveaway programs and reduced price sales of food exist.
What kinds of signals are being sent to producers and consumers when price floors are used? Because the price is being held above equilibrium, consumers are being told to use less of the good while the higher price is a signal to producers to produce more of the good. Yet there is a surplus of the good at the price floor! Clearly, the reactions of buyers and sellers to a price floor are really opposite of what is efficient. To encourage sellers to produce more of a surplus good is a waste of resources. To induce buyers to economize on their use of a surplus good also makes little sense. Too much corn is being produced. At the price floor, buyers are willing to pay only $2.25 for the last bushel of corn, which has a marginal cost of $3.00 to produce. This is clearly inefficient. Therefore, most economists do not support price floors. Such floors may be well intentioned (to help certain groups which may need assistance), but their effects on markets are undesirable. If the goal is to help some group by increasing its members' income, it would be better to give direct cash assistance to the members than to cause inefficiency in markets by placing price floors on some goods. This is basically the approach the government takes now to the farm program. Target prices are set for various farm products and then deficiency payments are made directly to farmers equal to the difference between the actual price of a farm product (such as corn) and the target price.
Alternatives to a price floor would include either to try to increase the demand for the good or reduce the supply or some combination of the two. It can be seen that if either supply decreases or demand increases, the equilibrium price of the good will rise and perhaps lead to a situation where a price floor is no longer necessary. The government has tried to reduce the supply of farm products through various types of acreage allotment programs, soil bank conservation, etc. However, these have had limited effectiveness in reducing supply. Through advertising, farmers have attempted to increase the demand for various products-milk, beef, pork, etc. It is difficult to discern how effective these campaigns have been.
An additional way in which the government may affect the outcome of a market is through the use of a subsidy on a good. The purpose of a subsidy would be to increase the amount of a good that is purchased. If the government believed that some good is particularly meritorious or beneficial for consumers, it could use a subsidy on that good to induce buyers to purchase more of the good.
One way to subsidize a good would be for the government to send a check to buyers for every unit of a good that they purchase. Or it could give the subsidy to sellers in the form of a check for every unit that they sell. While there may be some slight differences in the final results, subsidizing the buyer or the seller gives a similar outcome. Since, typically, sellers are less numerous than buyers, it is usually easier for the government to give the subsidy to the seller.
Suppose the government decides to place a subsidy on compact disks (CD's). Since CD's give such clear, true sound, it is felt that consumers' ears will be better protected if they listen to CD's rather than other music modes. Therefore, the government decides to give a subsidy of $1.00 to sellers for each CD that they sell.
This situation is depicted in Figure 5.5 where S1 and D1 are the original supply and demand curves, $12.00 is the original equilibrium price of CD's, and 1 million CD's per week is the original equilibrium quantity. A subsidy of $1.00 per CD is now given to sellers; how will this affect them? Basically, they are willing to offer more CD's at each price or, alternatively, they will accept a lower price for each amount sold than before. For example, at the equilibrium price of $12.00, sellers had been willing to sell 1 million CD's per week. If they now receive $1.00 from the government for each CD they sell, they would be willing to sell 1 million CD's per week at a price of $11.00 in the market since they would still get $11.00 + $1.00 (from the government) = $12.00 for the CD's. This would be true at each price so that the entire supply curve shifts down by the amount of the subsidy ($1.00) to S2.
It can be seen from Figure 5.5 that with the new supply curve, S2, the equilibrium price is $11.60 while the equilibrium quantity is 1.1 million CD's per week. Note, however, that the sellers of CD's now receive $12.60 (equal to the price of $11.60 plus the $1.00 from the government). Also note that the government pays out $1.1 million in subsidies each week for the purchase of CD's. We should also consider how the $1.00 subsidy was split between sellers and buyers. While the $1.00 per CD subsidy is given to the sellers, buyers receive part of the subsidy in the form of a lower price for CD's ($11.60 vs $12.00). Therefore, in effect, the sellers keep $.60 of the subsidy for themselves and pass the remaining $.40 on to buyers in the form of a lower price for CD's.
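Fitting linear curves through the before-and-after points quoted for Figure 5.5 (linearity is again our assumption) verifies both the new equilibrium and the split of the subsidy:

```python
# Linear curves consistent with the CD numbers in Figure 5.5 (assumed
# linear; only the points are given in the text). Quantities in millions
# of CDs per week, prices in dollars.
def solve_with_subsidy(s):
    # Demand: Qd = 4.0 - 0.25 * P           (buyers pay P)
    # Supply: Qs = -1.0 + (P + s) / 6       (sellers receive P + s)
    # Setting Qd = Qs and solving gives P = 12 - 0.4 * s.
    p = 12 - 0.4 * s
    q = 4.0 - 0.25 * p
    return p, q

p0, q0 = solve_with_subsidy(0.0)    # (12.00, 1.0)  original equilibrium
p1, q1 = solve_with_subsidy(1.0)    # (11.60, 1.1)  with the $1 subsidy

buyers_share = p0 - p1              # $0.40 passed on as a lower price
sellers_share = 1.0 - buyers_share  # $0.60 kept by sellers
weekly_cost = 1.0 * q1              # $1.1 million paid by the government
print(p1, round(q1, 2), round(buyers_share, 2),
      round(sellers_share, 2), round(weekly_cost, 2))
```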
How should this subsidy be evaluated? Has it been effective? In this case, the answer is "yes." The consumption of CD's rose by 10% per week, which is in line with our desire to induce consumers to purchase more CD's. Note, however, in Figure 5.6, what would happen if the demand for CD's were rather inelastic at the current price of CD's. The equilibrium price and quantity are the same as before and we place a $1.00 subsidy on CD's as before. Now the equilibrium price falls by close to the full amount of the subsidy (down to $11.20 or by $.80) but the equilibrium quantity only increases by 20,000 CD's per week to 1.02 million. Therefore, quantity demanded increased by only two percent in response to a price decline of 6 2/3%. This is a rather extreme case, but it points up the fact that if a good with inelastic demand is subsidized, the subsidy will not be very effective in causing the quantity purchased to increase. But note that the government still is obligated to pay out $1.02 million per week ($1.00 per CD) in subsidies to CD sellers. Also note that the seller keeps only $.20 of the subsidy in this case and passes on the remaining $.80 to the buyers. So when demand is inelastic, the buyers get more of the subsidy through lower prices.
Furthermore, even if the subsidy is effective in increasing purchases of CD's (as in the first case), there are still some considerations relating to the desirability of the policy. First, resources are needed to implement and enforce the policy just as in the above cases of price floors and ceilings. But, perhaps more important, we should be concerned about using subsidies to encourage the purchase of goods. In our case, more resources will be needed to produce the additional CD's that are demanded. Those resources, of course, cannot then be used to produce other goods and services. This may or may not be desirable. Supply curve S1 in Figure 5.5 represents the marginal cost of producing CD's. Therefore, at the new equilibrium, consumers buy CD's for $11.60 each, yet the CD's cost $12.60 each to produce. This means $12.60 worth of resources are used to produce a good that some consumers value at only $11.60. That is inefficient. What it does mean is that the government should have some very good reasons before deciding to subsidize a good (e.g., saving people's ears). This would apply to our first case where demand is fairly elastic. When demand for a good is inelastic as in Figure 5.6, it is futile to even attempt to subsidize the good.
Subsidies are frequently given in our country through the tax system. For example, if you buy a home, most likely you will borrow the money from a financial institution (such as a bank) and pay thousands of dollars in interest over the life of the mortgage. However, any interest you pay for your home loan is deductible from your income tax and so can save you thousands of dollars in taxes. This amounts to a subsidy to you from the government for buying your home. Why should the government subsidize your purchase of a home? We will explore that and other questions on the bulletin board this week.
As will be explained in a later chapter, there are many types of taxes that are levied by government. One type of tax is considered in this chapter: excise taxes. These are taxes on specific goods. Excise taxes may be expressed as a percent of the price of the good (e.g., 10% of the purchase price) or per unit of the good ($5.00 per unit). Both types are used in the United States. For simplicity it is easier to consider excise taxes expressed per unit of a good. As before, an excise tax could be placed on either the buyer or the seller. For simplicity, excise taxes are usually placed on the sellers because there are fewer of them, making administration of the tax easier. However, while sellers may have the responsibility of handing over the tax revenue to the government, they may not necessarily pay the tax. That is, they may be able to shift the burden of the tax to the buyer.
There are two principal reasons for imposing an excise tax on a good. One is to raise revenue for the government. The second reason is to discourage the purchase of a good. To a certain extent, these two purposes are in conflict with each other.
Consider Figure 5.7 where S1 and D1 are the original supply and demand curves of CD's and $12.00 and 1 million CD's per week are the equilibrium price and quantity. Starting from the same equilibrium in the previous section, let's now impose a $1.00 per unit excise tax on CD's. For every CD they produce and sell, sellers must now pay the government $1.00. For now, presume that the primary goal of the tax is to raise revenue.
The $1.00 per CD excise tax has the effect of raising the costs of the sellers since they must now pay this tax in addition to their normal production costs. The effect of the tax is to cause the supply curve to shift upward to S2 by the amount of the tax, $1.00. The reason is that at the equilibrium price, e.g., of $12.00, sellers were willing to sell 1 million CD's per week. When the tax is imposed, they must now receive $13.00 if they are going to be willing to supply 1 million CD's as before ($13.00 minus $1.00 gives $12.00). So the supply curve has shifted up by $1.00 at that point and, for similar reasons, it shifts up by $1.00 at every other point on the curve.
This results in a new equilibrium at a price of $12.40 and a quantity of .9 million CD's (or 900,000) per week. Note that the sellers, however, only receive $11.40 after paying the tax to the government. This means that the consumers paid $.40 of the tax in the form of a higher price while sellers absorbed the other $.60. So part of the tax is shifted forward to consumers and part of it is paid by the sellers. Also note that the government receives tax revenue of $900,000 per week ($1.00 times 900,000 CD's). If the equilibrium quantity had remained the same, the government's revenue would have been $1 million per week.
Contrast the results in Figure 5.7 with the situation in Figure 5.8. Here the demand for CD's is quite inelastic at the current price. Starting from a price of $12.00 and quantity of 1 million CD's per week and original demand and supply curves of D1 and S1, it is seen that the $1.00 excise tax in this case causes price to rise to $12.80 while the quantity demanded falls only to .98 million CD's per week (or 980,000). Consumers pay most of the excise tax ($.80) and government revenue is greater than in the above case, $980,000 per week. Therefore, when demand is inelastic, more of the excise tax is passed onto consumers in the form of higher prices and government receives more revenue than when demand is more elastic as in the first case.
From the two cases, we see that if the major purpose of an excise tax is to raise revenue for the government, the tax will be more effective if it is placed on goods with inelastic demand. If the major purpose of the excise tax is to discourage consumption, the tax will be more effective if the demand for the good is elastic.
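Both cases can be verified with linear curves fitted through the quoted points (again an assumption), treating the tax as a wedge between the price buyers pay and the price sellers keep:

```python
# Tax incidence under the two demand curves in Figures 5.7 and 5.8.
# Quantities in millions of CDs per week, prices in dollars, tax t = $1.
def equilibrium(demand_slope, supply_slope, t):
    # Both curves pass through (P, Q) = (12, 1.0) with no tax.
    # Demand: Q = 1.0 + demand_slope * (P - 12)
    # Supply: Q = 1.0 + supply_slope * (P - t - 12)   (sellers keep P - t)
    p = 12 + t * supply_slope / (supply_slope - demand_slope)
    q = 1.0 + demand_slope * (p - 12)
    return p, q

# Figure 5.7: relatively elastic demand (slope -0.25 million per dollar)
p_e, q_e = equilibrium(-0.25, 1 / 6, t=1.0)    # (12.40, 0.90)
# Figure 5.8: inelastic demand (slope -0.025 million per dollar)
p_i, q_i = equilibrium(-0.025, 0.1, t=1.0)     # (12.80, 0.98)

print(round(p_e, 2), round(q_e, 2), round(1.0 * q_e, 2))
# consumers pay $0.40 of the tax; revenue $0.90 million per week
print(round(p_i, 2), round(q_i, 2), round(1.0 * q_i, 2))
# consumers pay $0.80 of the tax; revenue $0.98 million per week
```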
How to evaluate excise taxes? If the purpose is to raise revenue, then the taxation of goods with inelastic demand is appropriate. The tax will be fairly effective and it will not significantly affect the consumption of the taxed good. The only issue that might arise is fairness. Why should the buyers of a particular good (e.g., CD's) bear the burden of financing government? Should not the tax burden be spread more fairly among the general population such as through a tax on income? There is validity in such an argument.
As a means of reducing consumption, the use of an excise tax leads to a variety of questions. Will the tax be effective? As seen above, it will not be very effective if the good currently has inelastic demand. Should the government even try to reduce the consumption of a good? That is not an easy question to answer. So-called "sin" taxes are popular with some segments of the population. Examples would include excise taxes on cigarettes and alcohol. Many people believe that it is appropriate for government to try to reduce the consumption of such goods because of the harmful side effects from consuming them (poor health, drunken driving, etc.). Yet the purchasers of alcohol and cigarettes do get satisfaction from the use of such goods. Does society have the right to stand in judgment and tell individuals they should consume fewer cigarettes and less alcohol?
In our first example of excise taxes, there are people willing to pay $12.40 for another CD. An additional CD costs only $11.40 to produce, so efficiency would demand that more CD's be produced. But the imposition of the excise tax means no more CD's are produced beyond the new equilibrium at 900,000.
In general, when government intervenes in a market, efficiency is usually sacrificed in favor of whatever goal the government is pursuing (more revenue, lower prices for buyers, etc.). The fact that efficiency is reduced by such intervention means government should have some very good reasons for changing market results through the use of price floors, excise taxes, etc. Otherwise, the benefits of government intervention are likely to be outweighed by the costs.
CHAPTER REVIEW QUESTIONS AND PROBLEMS
1. Identify the gainers and losers from the imposition of a price ceiling on gasoline. Be as specific as you can.
2. Who are the gainers and losers when a price floor is placed on a good such as corn? Be as specific as you can.
3. Explain why a price ceiling will cause a shortage of a good if the ceiling is below the equilibrium price of the good.
4. Explain why a price floor will cause a surplus of a good if the floor is above the equilibrium price of the good.
5. Why does an excise tax on good X raise less revenue for the government if the demand for good X is elastic rather than inelastic?
6. From the standpoint of efficiency, why are price ceilings and price floors undesirable?
7. From the standpoint of efficiency, why are excise taxes and subsidies undesirable?
8. Suppose the state legislature decided to impose a price ceiling on fees at state universities in an attempt to keep the price of attending college low in Missouri. What might be the anticipated results of such a decision? How would a price ceiling affect the ability of state universities to provide education services? Be as specific as you can. Do you think this would be a good idea?
9. After the tragic events of Sept. 11, 2001, tourism fell dramatically in the US as people tended to stay home, partly out of fear and uncertainty and partly due to a recession. One proposal that was suggested at the time to help the tourism industry was to give people up to $1,000 tax credit for tourism related expenditures. This would be the same as giving people a subsidy for purchasing tourism services (hotels, transportation, dining at restaurants, etc.). Evaluate such a proposal. Do you think it would have been a good idea?
|
If the numbers 5, 7 and 4 go into this function machine, what numbers will come out?
In this article for teachers, Elizabeth Carruthers and Maulfry Worthington explore the differences between 'recording mathematics' and 'representing mathematical thinking'.
In this investigation, you are challenged to make mobile phone numbers which are easy to remember. What happens if you make a sequence adding 2 each time?
This number has 903 digits. What is the sum of all 903 digits?
Look carefully at the numbers. What do you notice? Can you make another square using the numbers 1 to 16, that displays the same
These sixteen children are standing in four lines of four, one behind the other. They are each holding a card with a number on it. Can you work out the missing numbers?
EWWNP means Exploring Wild and Wonderful Number Patterns Created by Yourself! Investigate what happens if we create number patterns using some simple rules.
Find the next number in this pattern: 3, 7, 19, 55 ...
Ben’s class were cutting up number tracks. First they cut them into twos and added up the numbers on each piece. What patterns could they see?
Can you design a new shape for the twenty-eight squares and arrange the numbers in a logical way? What patterns do you notice?
There were chews for 2p, mini eggs for 3p, Chocko bars for 5p and lollypops for 7p in the sweet shop. What could each of the children buy with their money?
Winifred Wytsh bought a box each of jelly babies, milk jelly bears, yellow jelly bees and jelly belly beans. In how many different ways could she make a jolly jelly feast with 32 legs?
What is happening at each box in these machines?
There is a clock-face where the numbers have become all mixed up. Can you find out where all the numbers have got to from these ten statements?
You have 5 darts and your target score is 44. How many different ways could you score 44?
The Scot, John Napier, invented these strips about 400 years ago to help calculate multiplication and division. Can you work out how to use Napier's bones to find the answer to these multiplications?
Investigate what happens when you add house numbers along a street in different ways.
There are three baskets, a brown one, a red one and a pink one, holding a total of 10 eggs. Can you use the information given to find out how many eggs are in each basket?
Use your logical-thinking skills to deduce how much Dan's crisps and ice-cream cost altogether.
I throw three dice and get 5, 3 and 2. Add the scores on the three dice. What do you get? Now multiply the scores. What do you notice?
Here are the prices for 1st and 2nd class mail within the UK. You have an unlimited number of each of these stamps. Which stamps would you need to post a parcel weighing 825g?
This problem is based on the story of the Pied Piper of Hamelin. Investigate the different numbers of people and rats there could have been if you know how many legs there are altogether!
Arrange the numbers 1 to 6 in each set of circles below. The sum of each side of the triangle should equal the number in its centre.
Where can you draw a line on a clock face so that the numbers on both sides have the same total?
Woof is a big dog. Yap is a little dog. Emma has 16 dog biscuits to give to the two dogs. She gave Woof 4 more biscuits than Yap. How many biscuits did each dog get?
The clockmaker's wife cut up his birthday cake to look like a clock face. Can you work out who received each piece?
Ten cards are put into five envelopes so that there are two cards in each envelope. The sum of the numbers inside it is written on each envelope. What numbers could be inside the envelopes?
Well now, what would happen if we lost all the nines in our number system? Have a go at writing the numbers out in this way and have a look at the multiplications table.
Can you make square numbers by adding two prime numbers together?
Add the sum of the squares of four numbers between 10 and 20 to the sum of the squares of three numbers less than 6 to make the square of another, larger, number.
Can you put plus signs in so this is true? 1 2 3 4 5 6 7 8 9 = 99 How many ways can you do it?
Find out what a Deca Tree is and then work out how many leaves there will be after the woodcutter has cut off a trunk, a branch, a twig and a leaf.
Annie cut this numbered cake into 3 pieces with 3 cuts so that the numbers on each piece added to the same total. Where were the cuts and what fraction of the whole cake was each piece?
There are 4 jugs which hold 9 litres, 7 litres, 4 litres and 2 litres. Find a way to pour 9 litres of drink from one jug to another until you are left with exactly 3 litres in three of the jugs.
An environment which simulates working with Cuisenaire rods.
The value of the circle changes in each of the following problems. Can you discover its value in each problem?
Can you score 100 by throwing rings on this board? Is there more than one way to do it?
Can you work out how many flowers there will be on the Amazing Splitting Plant after it has been growing for six weeks?
Katie had a pack of 20 cards numbered from 1 to 20. She arranged the cards into 6 unequal piles where each pile added to the same total. What was the total and how could this be done?
How could you put eight beanbags in the hoops so that there are four in the blue hoop, five in the red and six in the yellow? Can you find all the ways of doing this?
On the planet Vuv there are two sorts of creatures. The Zios have 3 legs and the Zepts have 7 legs. The great planetary explorer Nico counted 52 legs. How many Zios and how many Zepts were there?
Write the numbers up to 64 in an interesting way so that the shape they make at the end is interesting, different, more exciting ... than just a square.
Find at least one way to put in some operation signs (+ - x ÷) to make these digits come to 100.
There were 22 legs creeping across the web. How many flies? How many spiders?
Tim had nine cards each with a different number from 1 to 9 on it. How could he have put them into three piles so that the total in each pile was 15?
Explore Alex's number plumber. What questions would you like to ask? What do you think is happening to the numbers?
Skippy and Anna are locked in a room in a large castle. The key to that room, and all the other rooms, is a number. The numbers are locked away in a problem. Can you help them to get out?
Tom and Ben visited Numberland. Use the maps to work out the number of points each of their routes scores.
What happens when you add the digits of a number then multiply the result by 2 and you keep doing this? You could try for different numbers and different rules.
Can you find which shapes you need to put into the grid to make the totals at the end of each row and the bottom of each column?
|
Beacon Lesson Plan Library
Clips, Cards, Rocks and Rulers
Bay District Schools
Students work in pairs to use standard and non-standard tools to measure classroom objects. Partners compare data and respond to a journal prompt that provides application to real-world situations.
The student knows that a uniform unit is needed to measure in real-world situations (for example, length, weight, time, capacity).
- [How Big is a Foot], Rolf Myller, Econo-Clad Books; ISBN: 0833568531; (October 1999)
- Data Chart, one copy per student (see associated file)
- Standard ruler, one per student
- Small objects for non-standard measurement, one for each child with a few extra for choice (example; paper clips, playing cards, used pencils, chalk, rocks, counters, sticky notes, straws, crayons, erasers)
- Math Journals
1. Acquire a copy of the book, [How Big is a Foot] by Rolf Myller.
2. Gather materials for the standard and non-standard measurement activity: rulers, paper clips, playing cards, used pencils, chalk, rocks, counters, sticky notes, straws, crayons, erasers
3. Make copies of the data chart for each student.
1. Ask the students, “How big is a foot? Does it matter that everyone’s foot is not the same size?” Allow time for discussion. Then ask the students to listen to a story that tells about a time when the size of feet caused big problems. Read the book, [How Big is a Foot?] by Rolf Myller, as an introduction to standard and nonstandard units of measurement.
2. Discuss problems from the book that were caused by non-standard measurement tools. Have students name several items that could be used in the classroom as non-standard measurement tools. Identify a ruler as a standard measurement tool.
3. Explain to the students that they will be measuring objects in the classroom using standard and non-standard tools. Demonstrate to remind students that correct measurements are made when the left edge of the tool is lined up on the left side of the object being measured.
4. Distribute activity materials. (rulers, non-standard tools, chart for recording measurement from associated file)
5. Pairs of students will choose 5 objects in the classroom to measure. Each student will measure the objects using a standard tool (rulers). Next, the students will measure the same 5 objects using a variety of non-standard tools (paperclips, pencils, erasers). Record the measurements on charts.
6. Encourage the partners to compare the data on the charts. Ask guiding questions to help students clarify the information. For example: Did you both gather the same data when measuring like objects with rulers? Did you both gather the same data when using non-standard measurement tools? How was your data different? How was your data alike? Can you think of a time when you would need to have the exact same measurement as your partner?
7. Pass out math journals. Write prompt on the board and read it to the class. (Prompt: You have worked with a partner to measure items in our classroom using standard and nonstandard tools. Tell your reader how standard units of measurement are needed in real-world situations.) Students respond in the journals.
8. As students complete journal assignments, circulate and offer formative feedback.
Note: This lesson instructs and assesses use of standard and non-standard measurement of length only. Use completed charts to formatively assess student’s ability to use standard and non-standard tools to measure objects. Conference with students and formatively assess the journal response.
|
THE LAW OF COSINES
WE USE THE LAW OF COSINES AND THE LAW OF SINES to solve triangles that are not right-angled. Such triangles are called oblique triangles. The Law of Cosines is used much more widely than the Law of Sines. Specifically, when we know two sides of a triangle and their included angle, then the Law of Cosines enables us to find the third side.
Thus if we know sides a and b and their included angle θ, then the Law of Cosines states:
c² = a² + b² − 2ab cos θ.
(The Law of Cosines is an extension of the Pythagorean theorem, because if θ were a right angle, we would have c² = a² + b².)
Example 1. In triangle DEF, side e = 8 cm, f = 10 cm, and the angle at D is 60°. Find side d.
Solution. We know two sides and their included angle. Therefore, according to the Law of Cosines:
d² = e² + f² − 2ef cos 60°
d² = 8² + 10² − 2·8·10·½, since cos 60° = ½,
d² = 164 − 80
d² = 84.
d = √84.
Problem 1. In the oblique triangle ABC, find side b if side a = 5 cm, c = √2 cm, and they include an angle of 45°. No Tables.
b² = a² + c² − 2ac cos 45°
= 5² + (√2)² − 2·5·√2·cos 45°
= 25 + 2 − 10·√2·½√2, since cos 45° = ½√2,
= 25 + 2 − 10, (√2·√2 = 2)
= 17.
b = √17 cm.
Problem 2. In the oblique triangle PQR, find side r if side p = 5 in, q = 10 in, and they include an angle of 14°. (Table)
r² = 5² + 10² − 2·5·10 cos 14°
= 25 + 100 − 100(.970), from the Table.
= 125 − 97
= 28.
r = √28 in.
Example 2. In Example 1, we found that d = √84, which is approximately 9.17.
Use the Law of Sines to complete the solution of triangle DEF. That is, find angles E and F.
According to the Law of Sines,
sin F/f = sin D/d,
so that
sin F = f sin D/d = 10 × sin 60°/√84 ≈ 10 × .866/9.17 ≈ .944.
Therefore, on inspecting the Table for the angle whose sine is closest to .944,
Angle F ≈ 71°.
Angle E is then 180° − 60° − 71° = 49°. And so using the Laws of Sines and Cosines, we have completely solved the triangle.
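For readers who want to check such solutions numerically, here is a small sketch of the same side-angle-side procedure in Python; the function name is ours, not from the text:

```python
import math

def solve_sas(e, f, angle_d_deg):
    """Given sides e, f and their included angle D (degrees), solve the triangle."""
    d = math.sqrt(e**2 + f**2 - 2 * e * f * math.cos(math.radians(angle_d_deg)))
    # Find the angle opposite the shorter of e, f first: it is always acute,
    # so asin gives it directly; the last angle follows from the 180° sum.
    e, f = min(e, f), max(e, f)
    angle_e = math.degrees(math.asin(e * math.sin(math.radians(angle_d_deg)) / d))
    angle_f = 180.0 - angle_d_deg - angle_e
    return d, angle_e, angle_f

d, E, F = solve_sas(8, 10, 60)
print(round(d, 2), round(E, 1), round(F, 1))   # 9.17 49.1 70.9
```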
Proof of the Law of Cosines
Let ABC be a triangle with sides a, b, c. We will show
c² = a² + b² − 2ab cos C.
(The trigonometric functions are defined in terms of a right-angled triangle. Therefore it is only with the aid of right-angled triangles that we can prove anything.)
Draw BD perpendicular to CA, separating triangle ABC into the two right triangles BDC, BDA. BD is the height h of triangle ABC.
Call CD x. Then DA is the whole b minus the segment x: b − x.
In the right triangle BDC, cos C = x/a; therefore
x = a cos C . . . . . . . (1)
Now, in the right triangle BDC, according to the Pythagorean theorem,
h² + x² = a²,
h² = a² − x². . . . . . (2)
In the right triangle BDA,
c² = h² + (b − x)²
c² = h² + b² − 2bx + x².
For h², let us substitute line (2):
c² = a² − x² + b² − 2bx + x²
c² = a² + b² − 2bx.
Finally, for x, let us substitute line (1):
c² = a² + b² − 2b·a cos C.
c² = a² + b² − 2ab cos C.
This is what we wanted to prove.
In the same way, we could prove that
a² = b² + c² − 2bc cos A
b² = a² + c² − 2ac cos B.
This is the Law of Cosines.
|
Graphing Lines using a Table of Values - Concept
When first introduced to graphing lines, we often use a table of values to plot points and connect them. There are several other methods of graphing lines, including using a point and the slope. Sometimes graphing lines using an equation involves the same methods as using a table of values. Since we graph lines in the coordinate plane, it is necessary to understand how to connect graphs, tables and equations.
There are different methods you could use to go from an equation to a graph. Making a table is one of the most fundamental or basic level strategies for graphing a line. And, by the way, making a table will work when you start moving through your high school career and getting into other types of equations like curves. You can make a table for them also. But let's check it out for lines.

When you want to graph a line by making a table, keep in mind any point represents a solution to the equation. And here's what I mean. Every point has an X number and a Y number. When I input an X number into my equation, my Y value's my output. So any point that's on the line is a true solution to that equation. That becomes useful when you're making a table.

Tables usually look like this. You can either make them horizontally or vertically like that, if you want to. Totally up to your preference. And what you do is you choose any X numbers you want to. It's usually a good idea to use some negative values in addition to some positive values. And then what you do is, one by one, you're going to substitute these X numbers into your equation as inputs to find your corresponding Y value output. Then each one of these is going to turn into a point on your graph and you'll just connect them using a ruler.

One thing I want to make sure I point out to you guys before you start this process is that you want your points to be ruler-straight. Here's what I mean. Let's say I get my points on there and they kind of look like this and I have one that's kind of like out there. Well, these three are straight. So that's probably what the line looks like. But this point, I probably made an error. If I got three that are perfectly lined up, they're ruler-straight and I used a ruler to draw them, and I have this point that's just like a little bit off, chances are I made an error in my table. So go back to your table and make corrections.

Most people tend to do at least three points in their table to start with. I would recommend for your first tables you start doing, you start by doing about five points. And, again, you're going to substitute your X numbers in to find your Y values and then put those dots on the graph. They should make a ruler-straight line.
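The table-building procedure described above is easy to mimic; here is a minimal sketch in Python, using an illustrative equation y = 2x + 1 (not one from the video):

```python
# Build a table of values for the sample line y = 2x + 1: substitute each
# x input to get the y output, then plot the (x, y) pairs and connect
# them with a ruler.
def y(x):
    return 2 * x + 1

for x in (-2, -1, 0, 1, 2):   # a few negative and positive inputs
    print(f"x = {x:>2}  ->  y = {y(x):>2}")
# x = -2  ->  y = -3
# x = -1  ->  y = -1
# x =  0  ->  y =  1
# x =  1  ->  y =  3
# x =  2  ->  y =  5
```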
|
The articles of removal of the 1830 Treaty of Dancing Rabbit Creek were set into motion immediately. By 1831 and 1832, when Removal was in full force, mixed bloods still maintained their positions of trust and authority within the tribe. During Removal the percentage of mixed-blood captains — the headmen and leaders of the organized emigrant bands bound for the new Indian nation — was greater than their percentage within the overall population of the tribe (see Chart 22). Their understanding of the English language and the ways of Americans became even more valuable as the bands of emigrants made their way into western Arkansas and present day Oklahoma. As the emigrants reported to the government agents west of the Mississippi River a note of each arrival was entered in a journal in order to establish eligibility for the year’s supplies granted in the Treaty of Dancing Rabbit Creek (see Database of Choctaw Mixed Blood Names).
Prior to the emigrants’ departure from Mississippi federal officials had conducted a census to ascertain not only a population count but also to obtain available information about individual land holdings and improvements. Popularly called the Armstrong Roll, this census indicated sizes of families and also identified some mixed bloods (see Database of Choctaw Mixed Blood Names). The Armstrong Roll also contained the geographical locations of the Indians, documenting the fact that many mixed bloods and full bloods either lived together in the same or adjoining household or as close neighbors, while others lived in communities made up exclusively of full bloods or exclusively of mixed bloods.1 The full blood Indian wives of countrymen and mixed bloods preferred living near their full blood Choctaw relatives if possible. Since their husband’s influence and power often stemmed from the wife’s family connections, which were enhanced by proximity, the family unit normally remained near the traditional home.
This pattern eventually did change as some mixed bloods moved from the traditional home grounds to better farm and grazing lands. But even at the time of Removal many families, such as the mixed-blood Andersons, Juzans, and Favres, remained on their old holdings. There also was a large degree of family interaction between mixed bloods and full bloods as they participated in joint ventures. The many anecdotal accounts of Cushman, J.F.H. Claiborne, Halbert and Gaines indicate that mixed blood rode alongside full blood during the Tecumseh incidents in 1811, the Creek War, and the Jackson campaigns at Pensacola and New Orleans in 1814 and 1815. Most mixed bloods and full bloods were in agreement at the signing of the Treaty of Dancing Rabbit Creek, and on into Removal there was no exclusive division of powers and influence along quantitative bloodlines. Both full bloods and mixed bloods held high positions. In fact there is a long and documented history of cooperation, co-existence, and cohabitation between what outsiders have viewed as two separate groups. Most of the writings of the mixed bloods before and after removal indicate their strong identity, not as mixed bloods, but as Choctaw.
In earlier days a cultural chasm did exist between some mixed bloods and full bloods within the Choctaw Nation. Where the boundaries of two societies met, a dynamic, acculturative synthesis occurred. From the earliest periods — when traders settled in the tribe with their obvious practical advantages of foreign language and trading acumen — to the exodus from Mississippi, a constant, gradual process of cultural syncretism changed the several peoples coexisting in Choctaw country into a more homogenous whole. Full blood Indians learned some French, Spanish, and English words and customs, while white countrymen acquired a taste for corn and ball games along with Choctaw phrases and Indian wives. Many Indian customs such as binding infants’ heads to achieve a flattened shape, which was considered particularly desirable, infanticide, lex talionis, scaffolding the dead then “bonepicking” the remains of the deceased, and other practices either disappeared or were altered through pressure from missionaries and white or mixed-blood relatives. The tribe was gradually adopting some western ways.
By the time of Removal the culture of the Choctaw Indians had already changed appreciably from that found by DeSoto in the sixteenth century. A major change lay in the number of mixed bloods contained within the tribe, which conservatively can be estimated at around fifteen to twenty percent, and liberally at over thirty percent. Should that count seem high, it is wise to remember that Jedidiah Morse in his 1822 report on Indians stated that the Cherokee nation “by actual enumeration of the Agent in 1809, was 12,395 Cherokees, half of whom were of mixed blood…”2 Although the number of mixed bloods in the Choctaw Nation was not as high as in the Cherokee Nation, it was high enough to cause major cultural changes at a time when the tribe was experiencing great stress from the forces in favor of Indian Removal.
The diplomatic disposition of the Choctaw people which permitted them calmly to accept cultural evolution and change, and the growing number of mixed-blood tribal members who quickly grasped the idea that the American government’s policies of “civilization” could enhance Choctaw self-sufficiency and wealth, made for consistent relations from Washington’s administration through Jackson’s nearly half a century later. A major factor during this time was the changing ratio of Indians to white settlers on the Southern frontier, especially along the Mississippi River and most particularly around New Orleans and the Gulf region. As long as the settlers remained in the minority the American administrations followed a policy of pacification of the Indians through liberal trade agreements and monetary reward for those headmen and chiefs willing to accommodate the Americans. And it was not simply a matter of American officials tempting the Indian “children” with bribes; the cultural practices of the Southern tribes strongly favored acceptance of those traders and civil officials (including the earliest Frenchmen, Spaniards, Englishmen through the American commissioners in the 1820s) who offered gifts.
Although it is easy for modern commentators to affix moral labels to the gift-giving which almost always accompanied any major trading session or treaty talk, the practice was one for which the Indian tribesmen were as much responsible as were the Europeans who used the accepted Native American practice to effect trade agreements and military alliances. To assume that the tribal leaders were gulled into unfair pacts is to embrace the most crass sort of ethnocentrism which insults the intelligence and civilization of the Indians in question.
The fact that the Choctaw tribe existed in harmony and peace with the European occupiers of their territory from the early days of the eighteenth century through the first third of the nineteenth century is a tribute to their diplomatic acumen and maturity. There is little doubt that the tribe slowly lost territory to the several foreign governments which held the Gulf region, but it is also a fact that the tribe only relinquished peripheral lands upon which it had only tentative claims until the very last when it ceded its heartland at the Treaty of Dancing Rabbit Creek in 1830. In all, that is an enviable record when compared to the fate of Indian populations earlier in Mexico and South America or those plains Indians in the United States after the Civil War; both of the latter groups were overwhelmed in short order by superior military technology and strength. The Choctaw chose diplomacy over war and thereby lengthened their existence and identity as a people.
The tribe’s toleration of white traders also eased the friction which always exists between distinct and separate cultures in early contact with each other. The intermarriages and resulting mixed-blood progeny only made the bonds of identity and similarity stronger as the traders’ children were welcomed into the tribe. It is at this point, when an easily acculturated group of mixed bloods began to appear within the tribe, that some historians manufacture a schism between this new “breed” of Indian and their full-blood relatives. Except for a few isolated cases of family friction, which is common enough even in un-mixed societies, the record is silent in regard to any such schism. Instead, the extant evidence shows co-existence and community between these peoples.
Some students of American Indians have retroactively constructed a post-removal western social stigma towards mixed bloods, which simply did not exist to any measurable degree during the eighteenth century among the Choctaw Indians. In fact one finds a very different Indian culture in the trans-Mississippi West than in the cis-Mississippi region prior to Removal. In this case the image of the pre-Removal Choctaw tribe would be viewed through quite distorted lenses and could easily be misinterpreted.
As the number of Choctaw mixed bloods grew, so did the ease with which United States officials were able to introduce “civilizing” programs to the tribe. This study shows that during treaty talks it was often the mixed-blood advisors to the chiefs who recognized the commercial attraction of such things as cotton gins and iron works. As early as 1802 a cotton gin existed in Chickasaw country (at the site of Cotton Gin Port) and expectations were high among the Choctaw mixed bloods that they might obtain the same for their use.3 It was also the mixed bloods who early in the nineteenth century desired schools, and therefore the missionaries who came with them. In nearly every case of Choctaw acceptance of Jefferson’s proffered tools of civilization, one finds mixed bloods at the head of the line. Of course Jefferson’s program for the Old Southwest included much more than mere Indian pacification and assimilation. To a much greater degree he recognized the palpable weakness of the Southern frontier and acted on several fronts to strengthen it. On the diplomatic front he entered into negotiations with the Spaniards who controlled the Gulf Coast and the Mississippi River and later, after extended diplomacy, was able to buy Louisiana from France. On the home front he was keenly aware that the frontiersmen had to provide their own defense and mapped out a plan of prudent purchases of strategic Indian lands along the Mississippi River in order to make lands available for settlement. This influx of settlers, he opined, would put in place a militia able to defend itself and the Mississippi River valley from any foreign enemy.4
The Choctaw mixed bloods were an important element allowing the partial acceptance of Jefferson’s policy in the Indian lands of Mississippi Territory. They were given a Jeffersonian-Republican Indian Agent in Silas Dinsmoor from 1802 until the War of 1812, and one of their major interpreters, the countryman John Pitchlynn, also espoused the American ideology or cause. He is on record as being antagonistic to the emigrant Tory royalists who fled to Indian country during and after the American Revolution. He had family ties to Georgia and the Carolinas, and he was in agreement with nearly all requests made by the United States of the tribe. His sons later, after the Creek War, were very instrumental in helping American treaty commissioners persuade and cajole more and more territory from their tribal kinsmen. They communicated with Andrew Jackson from time to time and were in agreement with the overt Jeffersonian policy of acquiring land for settlement and militia purposes.
As a result of this pro-American sentiment among leading mixed bloods who had strong family ties to tribal chiefs, the tribe itself remained friendly to the United States and fought alongside American militiamen throughout the Gulf theatre of the War of 1812 and the Creek War which was encompassed by the larger conflict. The long-standing American policy of amity had produced dividends from the Choctaw and their cousins the Chickasaw.
During this period when Jefferson actively pursued cessions from the Choctaw, he was quite sensitive to the possibility of offending them through too blatant an approach and ordered his functionaries to use tact and diplomacy in their dealings with the tribe. Although he has been accused of following an extortive policy of running the Indians into debt to force cessions, there is no record that such a practice was followed among the Choctaw tribe. Instead there exists much evidence indicating that the Choctaw factory, one of the largest trading houses, was operated under strict business restraints and only offered credit in small amounts, and then only to reputable individuals. Jefferson plainly instructed that the factories were to be operated as non-profit organizations meant mainly to pacify the Indians they served and to further interdict Indian trade with foreigners below the thirty-first parallel in West Florida. The few debts that were allowed by the Choctaw factors were never used as leverage in any treaty talks. The only time debt is mentioned is in regard to foreign traders or when the tribe itself asked that debt be excused or otherwise ameliorated. There is even a sense of government reluctance in the various treaty talks to forgive any Choctaw his individual debts. The debts were fairly equally shared by full blood and mixed blood alike, with no noticeable favoritism being practiced by the factors.
The leading mixed bloods often were traders themselves. Some, such as the venerable Ben James, had earlier been aligned with the Spanish and British trading houses out of Pensacola and Mobile. Others, such as John Pitchlynn, Jr., entered into Indian trade later when the government trading houses were terminated by legislation in the early 1820s. This mixed-blood propensity for trade and business was a major reason for their importance to the tribe, for most of the tribe were strongly attracted to American and European trade goods. Most tribal traders were considered wealthy and honorable husbands for Choctaw women from influential families.
But trade and economics were not the only forces operating on the tribe during this time. Between 1786, when the tribe recognized the United States as its main political partner at the Treaty of Hopewell, and the Removal of the 1830s, several other important factors also operated upon the tribe. The entire period was a time when great social change was sweeping Europe in the form of the French Revolution and the Napoleonic Wars. The tribe certainly felt the winds of change as the Spanish power in the region was weakened to the point of ineffectiveness; their French allies of a half-century sold Louisiana and then retired from the local military and diplomatic field. After the British presence was neutered by Jackson’s defeat of Pakenham at New Orleans in 1815, the Choctaw chiefs did not need augury and superhuman perception to understand that the United States was emerging as the primary political and military power in the region. The self-evident fact was buttressed by the chiefs’ own perspicacity and the wise counsel of their mixed-blood kinsmen.
Far from being immature children who did not really understand the complexities of modern international tensions, the chiefs were experienced men who viewed the American political structure first-hand when they visited Washington in 1804 and again in 1825. Some of the same individuals were on both trips and certainly recognized the changes in economic growth and demographics which had transpired during the two decades. Some of the children of leading mixed bloods and full bloods attended academies and schools outside the nation and returned to explain that world to their brothers and kinsmen back home. The mixed bloods and full bloods sensed the futility of actively opposing American expansion and generally resigned themselves to eventual assimilation or Removal.
Added to these considerations were the teachings of the missionaries and their admonishments to do things “the American way.” Although they entered the area late and did not become a major force until the 1820s, the missionaries acted as a catalyst for change within the tribe. As each ancient practice, such as scaffolding the dead, was eradicated, the tribe became less Choctaw and more American. As each Bible verse was learned and each English lesson mastered, the tribe as a distinct Indian culture became less viable. All of the old ways not in consonance with the missionaries’ concept of the Christian ethic were slowly and steadily silenced. The practices of witch killing, infanticide, lex talionis, rainmaking, and so on, all succumbed to “civilizing” pressures. At a time when whites had not yet entered into heated debates over their own practices of abortion, mercy killing, and faith healing, the old Indian ways seemed quite primitive. But one bright event still shines out of the missionary effort, that of creating an alphabet and syllabary of the Choctaw language. Originally intended to ease the task of forcing the Christian Bible upon the non-English speaking tribesmen, the endeavor resulted in preserving the language for posterity and thus saved a crucial facet of Choctaw culture and identity. The Christian religion, like the other forces acting upon the tribe, enjoyed the dual role of destroyer and preserver.
Of all the factors operative upon the tribe only the ideological and political pressures rivaled economics. As the mixed bloods ascended to positions of leadership within the three tribal districts in the state of Mississippi in the aftermath of the Creek War, they brought their mixed ideological views with them. They understood and respected old tribal ways, but they also favored changes: they supported the election of chiefs by the leading men of each district along the lines of the American electoral system; they envisioned and created the Choctaw Lighthorse police force along militia lines and began the enforcement of laws by this more European and American method of social control; and they moved in the direction of unifying the three autonomous districts of the Choctaw Nation into one democratic chiefdom.
The mixed bloods were not alone in their desire to “modernize” Choctaw ways. Even the vaunted full-blood chief Mushalatubbee once considered throwing his hat into the white political ring and running for a state legislative office. But it was the mixed bloods who found it easiest to adopt American ways. After Removal, Greenwood Leflore did become a Mississippi legislator and prosperous cotton planter, thus realizing the Jeffersonian goal of evolving from Indian to yeoman farmer and beyond.
This changeover was not without its conflicts and intertribal tension. These two chiefs, Leflore and Mushalatubbee, along with Nituckachee, Little Leader, and others, vied for some degree of control of the tribe. Although some historians simplify the struggle as a racial one between mixed and full bloods, correspondence and records indicate that the misunderstandings were regional and sectional in nature. A major misreading of history occurs when historians attempt to moralize about the clash between Leflore and Mushalatubbee in the late 1820s, some siding with Mushalatubbee’s “strong” stance against Leflore’s “despotic” machinations. Both men fall far short of hero status, yet certainly cannot be called villains. Both pursued what they considered to be noble aspirations; both were also guilty of cupidity and self-interest. In essence, they were human. Mushalatubbee seemed much more concerned about his personal pension and land for his relatives and friends than about the disappearing Choctaw culture. Leflore was not shy about seeking power, but after the Treaty of Dancing Rabbit Creek, which he helped orchestrate, he was stung by the tribal reprimand, which demanded his resignation in October 1830. His ouster from office was perhaps the most patently unifying act the tribe effected prior to Removal and foreshadowed later unifying efforts in Indian Territory.
The sectionalism existing within the Choctaw tribe in the early 1800s also underwent a period of nationalism remarkably parallel to the white American and European experiences of the time. Although much attention has been directed to the fuss and bluster of such Choctaw sectional disputes as one section sending more students to Kentucky than did another, or how the treaty annuities should be divided, the rise of Leflore to the position of Chief of all the Choctaws was a very important event. It demonstrated that the leading mixed and full bloods could compromise and reach common decisions beyond sectional interests. It also helped set the stage for Removal, but that should be viewed in the light of the near certainty of Removal by the late 1820s. Better to say that Removal occurred in spite of Choctaw unification rather than because of it. Of course when negotiations began, the sectional interests of the several chiefs were quite pronounced, but Leflore’s earlier identity as chief helped him sway the other leading men into accepting the treaty. Choctaw unification thus paved the way for the speedy passage of the Treaty of Dancing Rabbit Creek. The pressures for removal were so great at the time — from the state of Mississippi, from the federal government, from many tribal members (mixed and full blood), and even from some missionaries, especially the Methodists — that a treaty would have been soon effected even in the absence of a Leflore. Removal was an event brought about by complex forces operating over decades, not the sellout of one individual, or clique, in the signing of one treaty.
Considering that the many forces behind Removal had been building for years, it is also a bit simplistic to conjure up a super villain in the person of Andrew Jackson and point to him as the supreme nemesis of the American Indian. The Choctaw nation venerated Jackson and looked to him for fair play and justice before and during Removal.5 Jackson was certainly less than infallible in his Indian policy, but he always referred to the Choctaw Indians in the kindest terms as allies on the field of battle. Jackson’s image suffers from the alcoholic malfeasance of Choctaw Agent William Ward whose failure to register properly those tribesmen desiring to remain on their homesteads in Mississippi led to what was perhaps the most inhumane aspect of Removal, the loss of land by the several thousand Choctaws wishing to remain legally on their ancestral lands.
There seems to be some pervasive element in the human persona that drives it to single out “great men” and “villains” as prime movers in historic events; yet in Indian Removal it was really the American government, acting on a longstanding American policy, that led to the Removal of the Southern Indians. No deviant or perverse personality was the actuator of those deeds the country would later view as distasteful. Analysis of the events of the time suggests that if Andrew Jackson had never been elected president, another frontiersman or westerner would have been elected in his stead, because the populace of the country wanted that kind of president to represent their views in Washington. Almost any westerner elected to the presidency in the late 1820s would have yielded to pressures from the growing South and West to bring about Indian Removal; it was what the average citizen there wanted.
The arguments for and against Indian Removal in the United States were mainly regional; most of the opposition to Removal came from Northeastern spokesmen and legislators who used idealistic moral issues to pursue pragmatic regional arguments. Growing Southern and Western populations reduced the relative importance and political power of other regions of the country. When Mississippi and Alabama evicted their resident Indian tribes there was an economic boom as land speculators poured into the region, bringing the chronicled “flush times” with them. As the prime cotton lands in the Delta region of northwest Mississippi, and the Black Belt region of northeast Mississippi and northwest Alabama, became available to planters, the fabled antebellum South took shape, slavery rapidly spread across the former Mississippi Territory, and the storm clouds of the Civil War began gathering on the horizon. Indian Removal therefore can be viewed as an integral part of the surge in the slave-based Southern plantation economy. It can also be viewed as part of a regionally dichotomous rhetoric that led to increased sectionalism.
The Choctaw tribe mainly sympathized with the Southern defense of slavery and condoned the “peculiar institution” itself. Many leaders of the tribe owned slaves and the practice continued within the tribe after Removal; by the time of the American Civil War the tribe found itself as much Southern as Indian and became embroiled in the hostilities. Indeed the values and politics of the Choctaw tribe after Removal can just as easily be viewed as Southern as Indian. And a similar statement can be made about many white Southerners who had more than a few Indian cultural traits.
Many Mississippians and Alabamans retained, to a greater or lesser degree, an “Indianness” which persists to the present day. In other words, Choctaw culture to some extent also diffused into the culture of the early white settlers in the Tombigbee River watershed as well as those in the Pearl River watershed, and can yet be discerned in the rural Mississippi and Alabama border area. The cultural diffusion was accelerated by the many intermarriages into the white families of the day and the many mixed bloods who stayed in their traditional homelands after Removal as “white” Mississippians. The several thousand full blood Choctaw remaining in Mississippi after Removal helped continue this cultural exchange via their connections with their mixed-blood cousins.
Folks in the area today still roam the springtime woods in search of the seasonal mayhaw, the dewberry, the Chickasaw plum, and later in the year the huckleberry and wild muscadine grape. Spring is still greeted with a traditional burning of the woods, and the planting of corn is so necessary a part of the rural South that some white farmers grow it just to give it away in a manner more ritualistic than altruistic. Hunting season is also a time of ritualistic preparations when rural and some urban Southern young men take to the forests to track the deer and turkey which have for centuries filled the larders of the people living in the region. One can still find the old-fashioned Indian style homestead yards where every blade of grass is plucked and the sand and dirt swept clean in order to prevent snakes and vermin from entering the home. And then there’s that ubiquitous high-cheek-boned Southerner who appears from Texas to Georgia and readily tells any and all comers that his great-great grandmother was a Cherokee princess. Even though such evidence is highly anecdotal, once one begins looking for it, the “Indianness” of the South is overwhelming.
The further one traces one’s roots back into time in the South, the greater the chance that the records will prove one’s ancestors were mixed bloods. Given the history and fireworks of race relations in the South, the degree of “Indianness” becomes even more fascinating.6 In view of these conclusions it is easy to see that Indian history cannot be treated as an isolated, interior occurrence separate from the pressures and events in American and world history. Indian history is an integral part of American and international history and was driven by the same events occurring in the broader arena of world affairs. The enduring message the Choctaw people of the eighteenth and nineteenth centuries have left us is that they were survivors, not victims.
1. This journal can be found on microfilm in the Lackey Collection, McCain Library, University of Southern Mississippi. It is distinct from the earlier Armstrong Roll.
2. Jedidiah Morse, A Report On Indian Affairs (New Haven: S. Converse, 1822; reprint ed., New York: Augustus M. Kelley, 1970), 152. The author has not verified these figures mentioned by Morse, but has noted a large number of mixed-blood identifications in early Cherokee censuses.
3. For a detailed description of Indian cotton growing, see Daniel H. Usner, Jr., “American Indians on the Cotton Frontier: Changing Economic Relations with Citizens and Slaves in the Mississippi Territory,” Journal of American History, 72 (Sept 1985), 297-317.
4. Dumas Malone in his in-depth six-volume study of Jefferson only touches briefly upon the president’s efforts to structure a militia defense on newly acquired lands, but in Jefferson the President: First Term, 1801-1805, p. 514, he does acknowledge the work of Mary P. Adams in “Jefferson’s Reaction to the Treaty of San Ildefonso,” Journal of Southern History, 21 (May 1955), 173-188, as a needed correction to a historical misunderstanding of Jefferson’s defense policies.
5. Modern Mississippi Choctaw still use a replica of the drum given the tribe by Jackson, as a token of his appreciation after their aid in the War of 1812, in their ceremonial parades prior to re-enacting the ancient ball play, and specifically tell their children that the drum was a gift from Jackson. It is interesting how such rituals transcend even scholarly efforts to correct the “truth” about Jackson.
6. See Appendix A (Database of Choctaw Mixed Blood Names) for the rare documented instances of Indian/Black mixed bloods. Often the records that are available to indicate white-red blood mixes are lacking for red-black mixes, though such combinations did exist. In modern times, when documentation of red-black mixtures is available, it usually is for a tri-racial mix of Red, White, and Black.
|
Channel pattern is used to describe the plan view of a reach of river as seen from an airplane, and includes meandering, braiding, or relatively straight channels.
Natural channels characteristically exhibit alternating pools or deep reaches and riffles or shallow reaches, regardless of the type of pattern. The length of the pool or distance between riffles in a straight channel equals the straight-line distance between successive points of inflection in the wave pattern of a meandering river of the same width. The points of inflection are also shallow points and correspond to riffles in the straight channel. This distance, which is half the wavelength of the meander, varies approximately as a linear function of channel width. In the data we analyzed, the meander wavelength, or twice the distance between successive riffles, is from 7 to 12 times the channel width. It is concluded that the mechanics which may lead to meandering operate in straight channels.
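The quantitative relation above lends itself to a quick back-of-the-envelope estimate. The following minimal Python sketch applies the stated 7-to-12-widths range; the function name and the sample 30 m width are assumptions for illustration only.

```python
# Rough estimates from the relation reported above: meander wavelength
# is about 7 to 12 channel widths, and riffle spacing is about half the
# wavelength. The function name and the 30 m width are illustrative.

def meander_estimates(width_m, low=7.0, high=12.0):
    return {
        "wavelength_m": (low * width_m, high * width_m),
        "riffle_spacing_m": (low * width_m / 2, high * width_m / 2),
    }

print(meander_estimates(30.0))
# {'wavelength_m': (210.0, 360.0), 'riffle_spacing_m': (105.0, 180.0)}
```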
River braiding is characterized by channel division around alluvial islands. The growth of an island begins with the deposition of a central bar, which results from sorting and deposition of the coarser fractions of the load that locally cannot be transported. The bar grows downstream and in height by continued deposition on its surface, forcing the water into the flanking channels, which, to carry the flow, deepen and cut laterally into the original banks. Such deepening locally lowers the water surface, and the central bar emerges as an island which becomes stabilized by vegetation.
Braiding was observed in a small river in a laboratory. Measurements of the adjustments of velocity, depth, width, and slope associated with island development lead to the conclusion that braiding is one of the many patterns which can maintain quasi-equilibrium among discharge, load, and transporting ability. Braiding does not necessarily indicate an excess of total load.
Channel cross section and pattern are ultimately controlled by the discharge and load provided by the drainage basin. It is important, therefore, to develop a picture of how the several variables involved in channel shape interact to result in observed channel characteristics. Such a rationale is summarized as follows:
Channel width appears to be primarily a function of near-bankfull discharge, in conjunction with the inherent resistance of bed and bank to scour. Excessive width increases the shear on the bed at the expense of that on the bank and the reverse is true for very narrow widths. Because at high stages width adjustment can take place rapidly and with the evacuation or deposition of relatively small volumes of debris, achievement of a relatively stable width at high flow is a primary adjustment to which the further interadjustments between depth, velocity, slope, and roughness tend to accommodate.
Channel roughness, to the extent that it is determined by particle size, is an independent factor related to the drainage basin rather than to the channel. Roughness in streams carrying fine material, however, is also a function of the dunes or other characteristics of bed configuration. Where roughness is independently determined as well as discharge and load, these studies indicate that a particular slope is associated with the roughness. At the width determined by the discharge, velocity and depth must be adjusted to satisfy quasi-equilibrium in accord with the particular slope. But if roughness also is variable, depending on the transitory configuration of the bed, then a number of combinations of velocity, depth, and slope will satisfy equilibrium.
An increase in load at constant discharge, width, and caliber of load tends to be associated with an increasing slope if the roughness (dune or bed configuration) changes with the load. In the laboratory river an increase of load at constant discharge, width, and caliber resulted in progressive aggradation of long reaches of channel at constant slope.
River Channel Patterns: Braided, Meandering, and Straight (USGS Numbered Series)
|
This unit works with the properties of addition and subtraction, specifically the Commutative Property of addition and the Associative Property of addition, to help students see that when they know the solution to one equation, it can lead to an understanding of many other related equations. So by applying these two properties, the student who knows that 8 + 7 = 15 will also immediately know that 7 + 8 = 15 and that 15 – 7 = __ can be thought of as 15 = 7 + __. They will also know that when adding 3 + 8 + 7 + 1, they can add the 3 and 7 to get 10, then add the 8 and 1 to get 9, and arrive at the total of 19 very efficiently.
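As a quick mechanical check of the reasoning above, here is a minimal Python sketch; the numbers mirror the unit's own examples, and the script itself is only an illustration, not part of the unit materials.

```python
# Checks of the unit's examples; the script is an illustration only.

# Commutative Property: order of addends doesn't change the sum.
assert 8 + 7 == 7 + 8 == 15

# Subtraction as an unknown-addend problem: 15 - 7 = __ asks the same
# question as 15 = 7 + __.
unknown = 15 - 7
assert 7 + unknown == 15

# Associative Property: regroup 3 + 8 + 7 + 1 into friendly tens.
assert (3 + 7) + (8 + 1) == 3 + 8 + 7 + 1 == 19

print("All property checks pass.")
```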
A question is essential when it stimulates multi-layered inquiry, provokes deep thought and lively discussion, requires students to consider alternatives and justify their reasoning, encourages re-thinking of big ideas, makes meaningful connections with prior learning, and provides students with opportunities to apply problem-solving skills to authentic situations.
Properties of Operations and the Relationship between Addition and Subtraction
Additional information such as Teachers Notes, Enduring Understandings, Content Emphasis by Cluster, Focus Standards, Possible Student Outcomes, Essential Skills and Knowledge Statements and Clarifications, and Interdisciplinary Connections can be found in this Lesson Unit.
AVAILABLE MODEL LESSON PLANS
The lesson plan(s) have been written with specific standards in mind. Each model lesson plan is only a MODEL - one way the lesson could be developed. We have NOT included any references to the timing associated with delivering this model. Each teacher will need to make decisions related to the timing of the lesson plan based on the learning needs of students in the class. The model lesson plans are designed to generate evidence of student understanding.
This chart indicates one or more lesson plans which have been developed for this unit. Lesson plans are being written and posted on the Curriculum Management System as they are completed. Please check back periodically for additional postings.
Apply Properties of Operations as Strategies to Add and Subtract
CCSC Alignment: 1.OA.B.3-4
Students explore the Commutative Property and addition combinations to create number sentences that show various ways to arrive at a specific sum. They use ten frames to build their model and develop an understanding of addition properties.
AVAILABLE MODEL LESSON SEEDS
The lesson seed(s) have been written with specific standards in mind. These suggested activity/activities are not intended to be prescriptive, exhaustive, or sequential; they simply demonstrate how specific content can be used to help students learn the skills described in the standards. Seeds are designed to give teachers ideas for developing their own activities in order to generate evidence of student understanding.
This chart indicates one or more lesson seeds which have been developed for this unit. Lesson seeds are being written and posted on the Curriculum Management System as they are completed. Please check back periodically for additional postings.
Use the Commutative Property to Add Numbers More Efficiently
CCSC Alignment: 1.OA.B.3
Students explore the Commutative Property and find various combinations for the sum of 12 using two addends.
Use the Number Line to Illustrate the Commutative Property
CCSC Alignment: 1.OA.B.3
Students record equations for pairs of addends and display the combinations on the number line, modeling the Commutative Property.
Use the Associative Property to Add Three Addends Efficiently
Students play a number card game in which they draw three cards, arrange them using the Associative Property, and discuss which order makes it easier to arrive at the correct sum.
Understand Subtraction as an Unknown Addend Problem
CCSC Alignment: 1.OA.B.3-4
Students use counters, double ten frames, and part-part-whole mats to model the relationship between addition and subtraction and solve problems.
|
Panspermia theory burned to a crisp: bacteria couldn’t survive on meteorite
Published: 10 October 2008 (GMT+10)
Image by ESA
A number of evolutionists have become disillusioned with ideas that life could have evolved from non-living chemicals on Earth (i.e. via chemical evolution, sometimes called ‘abiogenesis’). So they hoped that with the whole universe to work with, life might have evolved elsewhere in the universe, and travelled to Earth. This is the theory of panspermia, from Greek πᾶς/πᾶν (pas/pan, all) and σπέρμα (sperma, seed), i.e. seeds of life are everywhere in the universe (see how one evolutionist ‘reasons’ to panspermia).
The classic form of panspermia is the theory that these seeds happen to hitch a ride on comets or meteorites (as opposed to ‘directed panspermia’ where the seeds are sent by aliens1). Yet a recent experiment has dealt a fatal blow to this theory, because it showed that such organisms couldn’t survive the extreme heat of entering the earth’s atmosphere—the same heating that causes meteoroids to become meteors or ‘shooting stars’.2
Scientists at the Centre of Molecular Biophysics in Orleans, France, managed to simulate a meteorite entry by attaching rocks to the heat shield of a returning Russian spacecraft (FOTON M3 capsule) last month. These rocks were smeared with a hardy bacterium called Chroococcidiopsis—supposed to resemble a proposed germ on Mars. The rocks also contained microfossils.
After the spacecraft was retrieved, the microfossils had survived, but the Chroococcidiopsis cells were burned black, although their outlines remained. Lead author Frances Westall says:
‘The STONE-6 experiment suggests that, if Martian sedimentary meteorites carry traces of past life, these traces could be safely transported to Earth. However, the results are more problematic when applied to panspermia. STONE-6 showed at least two centimetres (0.8 inch) of rock is not sufficient to protect the organisms during [atmospheric] entry.’2
Their original paper stated:
‘The Chroococcidiopsis did not survive but their carbonized remains did. Thus sedimentary meteorites from Mars could reach the surface of the Earth and, if they contain traces of fossil life, these traces could be preserved. However, living organisms may need more than 2 cm of rock protection.’3
The paper also had this typically cautious concluding remark:
‘However, because of a technological flaw, no conclusions can be drawn regarding the thickness of rocky materials needed to protect extant life during atmospheric entry.’
It turned out that there was:
‘burning of the back side of this particular sample owing, apparently, to the entry of heat and flames behind the sample. This occurred because the difference in composition between the carbon-carbon screws and the silicon phenolic material of the sample holder resulted in a space appearing between the screws and the screw holes. Thus, the Chroococcidiopsis cells were completely carbonised despite the 2 cm thickness of protective rock covering them.’
However, this didn’t stop the lead researcher from asserting that 2 cm of rock was insufficient, both in a press release and in their abstract. A real rock is likely to have gaps larger than those in the experiment.
Indeed, this experiment seems to understate the problems. The paper states:
‘Entry speed of the FOTON capsule was 7.6 km/sec, slightly lower than the normal meteorite velocities of 12–15 km/sec. It was possible to determine the minimum temperature reached during entry through the thermal dissociation of one of the space cement that occurs at a temperature of ~1700°C. Although the basalt control sample was lost, comparison with the results of the STONE 5 experiment indicates that the temperatures upon entry are high enough to form a fusion crust.’3
One must question whether a little over half the speed is ‘slightly lower’. It’s worse because the frictional drag and kinetic energy are proportional to the square of the velocity; i.e. if the velocity is doubled, the drag and energy are quadrupled.4
This indicates that a real meteorite would heat up much more, requiring an even thicker shield.
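To put rough numbers on that scaling, here is a minimal Python sketch; the speeds are the figures quoted above, and the v-squared scaling comes from the formulas in footnote 4.

```python
# Both kinetic energy and drag scale with v squared (footnote 4), so the
# heating load at meteorite speeds can be compared with the capsule test.

capsule_v = 7.6              # km/s, FOTON entry speed quoted in the paper
meteorite_vs = [12.0, 15.0]  # km/s, typical meteorite entry speeds quoted

for v in meteorite_vs:
    ratio = (v / capsule_v) ** 2
    print(f"At {v} km/s, drag and energy are about {ratio:.1f}x the test's")
# ~2.5x at 12 km/s and ~3.9x at 15 km/s, so a real entry is much harsher.
```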
Life from Mars?
This experiment also supports our rejection of the life from Mars hype in 1996, in that the atmosphere would likely fry any Martian meteoritic microbes. We also pointed out that life on Mars was more likely to have been blasted off from Earth in the first place, and this experiment indirectly reinforces this. I.e. the frictional drag is proportional to the atmospheric density,4 and the Martian atmosphere is < 1% as dense as ours. So planets with dense atmospheres are more likely to be sources than destinations for life.
Panspermia has now been shown to have a huge flaw. Since panspermia was a common last-ditch attempt to preserve materialism in the face of problems in chemical evolution on Earth, materialism itself has likewise taken yet another huge blow.
- Crick, F. and Orgel, L.E., Directed Panspermia, Icarus 19:341–346, 1973.
- Meteorite experiment deals blow to bugs from space theory, Physorg.com, 25 September 2008.
- Westall, F. et al., STONE 6: Sedimentary meteors from Mars, European Planetary Science Congress Abstracts 3, EPSC2008-A-00407, 2008.
- Kinetic energy is given by E = ½mv², where m is mass and v is velocity. Frictional drag force is given by f_drag = ½CρAv², where ρ is the air density, A the cross-sectional area, and C is a numerical drag coefficient.
|
Virtual reality (VR) has emerged as a groundbreaking technology that allows users to immerse themselves in digital environments and interact with them in real-time. By wearing a VR headset, individuals can enter into a simulated world that replicates the physical environment or creates an entirely new one. For instance, imagine a scenario where medical students could practice complex surgical procedures without any risk to actual patients. This level of immersion and interactivity has opened up exciting possibilities across various fields, from entertainment and education to healthcare and engineering.
The advent of computers has played a pivotal role in the development of virtual reality. Through powerful processors and advanced graphic capabilities, computers have enabled the creation of highly realistic and immersive virtual worlds. These digital frontiers are constructed using computer-generated imagery (CGI), allowing for intricate details and lifelike experiences. Additionally, computers facilitate the seamless tracking of user movements within these virtual environments through sensors, ensuring accurate representations of their physical actions. As such, computing technology serves as the backbone behind the successful implementation of virtual reality experiences.
In this article, we will explore how computers have revolutionized the field of virtual reality by examining its applications across different sectors. We will delve into the ways in which computers enable immersive experiences by analyzing their ability to generate visually stunning and realistic virtual environments. Through the processing power of computers, complex algorithms and graphics rendering techniques are employed to create lifelike textures, lighting effects, and physics simulations within these virtual worlds. This level of visual fidelity enhances the user’s sense of presence and immersion, making their interactions with the digital environment feel natural and engaging.
Moreover, computers enable real-time interactivity in virtual reality experiences. By employing powerful processors and efficient algorithms, computers can track the user’s movements and actions in real-time, allowing for seamless interactions within the virtual environment. This tracking capability is achieved through various sensors such as motion controllers or cameras that capture the user’s position and orientation. The data from these sensors is then processed by the computer to update the virtual world accordingly, ensuring a responsive and immersive experience.
Computers also play a crucial role in enabling collaborative virtual reality experiences. Through network connectivity and advanced computing technologies, multiple users can interact with each other within a shared virtual space. This opens up possibilities for remote collaboration, where individuals located in different physical locations can come together in a virtual environment to work on projects or engage in social activities.
Furthermore, computers contribute to the development of specialized applications and software platforms for various industries. From training simulators for pilots or firefighters to architectural visualization tools for designers and engineers, computers provide the computational power necessary to create tailored virtual reality solutions that meet specific industry needs.
In conclusion, computers have been instrumental in revolutionizing virtual reality technology by providing the processing power required to generate visually stunning and realistic environments while enabling real-time interactivity and collaborative experiences. As computing technology continues to advance, we can expect even more innovative applications of virtual reality across diverse fields.
The Evolution of 3D Modeling
Imagine a world where architects can create virtual models of their designs, allowing clients to explore every nook and cranny before construction even begins. This is the power of 3D modeling, a technology that has revolutionized various industries over the years. From its humble beginnings in the late 1960s to its sophisticated applications today, the evolution of 3D modeling has paved the way for stunning visualizations and immersive experiences.
One key milestone in the history of 3D modeling was the development of wireframe models. These early representations consisted solely of lines and points, providing a basic framework for objects in three-dimensional space. While limited in detail, these wireframe models laid the foundation for more complex digital creations to come.
As computing power increased, so too did the capabilities of 3D modeling software. The introduction of surface modeling brought an added level of realism by incorporating textures onto objects’ surfaces. Suddenly, computer-generated images could mimic real-world materials like wood grain or metal finishes with remarkable accuracy. Architects and designers now had tools at their disposal to showcase their visions in ways never before possible.
Furthermore, advancements in rendering techniques significantly enhanced the visual quality of 3D models. Ray tracing algorithms allowed for realistic lighting effects such as shadows and reflections, breathing life into virtual scenes. This breakthrough enabled filmmakers to create breathtaking special effects and immerse viewers in fantastical worlds previously only imaginable.
Noteworthy contributions from this era include:
- CAD (Computer-Aided Design) – Streamlined design processes across engineering disciplines.
- Virtual reality – Enabled users to step inside virtual environments and interact with objects.
- Augmented Reality – Overlaid digital information onto real-world settings, enhancing user experiences.
- Gaming industry adoption – Integrated advanced 3D graphics into video games, captivating players worldwide.
| Year | Milestone |
|------|-----------|
| 1963 | Ivan Sutherland develops the first computer graphics program, Sketchpad. |
| 1982 | Pixar Animation Studios pioneers computer-generated imagery (CGI) with “The Adventures of André and Wally B.” |
| 1995 | The release of Toy Story marks the first full-length feature film created entirely using CGI. |
| 2006 | Nintendo’s Wii gaming console introduces motion control, revolutionizing user interaction in games. |
With the rapid evolution and wide adoption of 3D modeling technologies, it became evident that animation was on the cusp of a revolution. In the subsequent section, we will explore how motion capture technology transformed the way animators bring characters to life, ushering in a new era of realism and creativity.
Revolutionizing Animation with Motion Capture
Moving forward on the digital frontier, advancements in computer technology have paved the way for immersive virtual reality experiences. By combining cutting-edge hardware and software, virtual reality (VR) is revolutionizing various industries, including gaming, healthcare, and education. To better understand the impact of this technological innovation, let us delve into its key components and potential applications.
One notable example that highlights the power of VR is its application in therapeutic settings. Imagine a scenario where individuals suffering from anxiety disorders can undergo exposure therapy without physically confronting their fears. Through a carefully tailored virtual environment, patients can gradually confront their anxieties in a safe and controlled manner. This allows them to build resilience while minimizing psychological distress—a breakthrough in mental health treatment.
The success of virtual reality lies in its ability to create an artificial world that convincingly mimics real-life scenarios. Several crucial factors contribute to this realism:
- High-quality displays: With advanced screen resolutions and refresh rates, VR headsets offer vivid visuals that enhance immersion.
- Precise tracking systems: Accurate tracking enables users to interact with virtual objects seamlessly and perceive realistic movements.
- Immersive audio: Spatial sound technologies further immerse users by providing 3D auditory cues that match visual stimuli.
- Intuitive controllers: Ergonomic handheld devices allow natural interaction within the virtual space, enhancing user engagement.
To gain a deeper understanding of these elements’ significance, consider the following table:
| Component | Contribution |
|-----------|--------------|
| High-quality displays | Visuals enhance immersion by providing vibrant and detailed graphics that closely resemble real-world environments. |
| Precise tracking | Tracking systems ensure accurate movement replication within the virtual environment for realistic interactions. |
| Immersive audio | Spatial sound adds depth and dimension to the experience by accurately representing positional audio cues. |
| Intuitive controllers | Controllers enable users to seamlessly navigate and interact with the virtual environment, enhancing user engagement. |
Incorporating these components, VR has already made significant strides in industries ranging from gaming to education. The versatility of this technology allows for a myriad of applications such as:
- Virtual training simulations: Professionals can practice complex procedures or scenarios in a risk-free environment.
- Enhanced educational experiences: Students can explore historical events or scientific concepts through immersive virtual field trips.
- Therapeutic interventions: VR-based treatments aid patients with phobias, PTSD, and other mental health conditions.
- Entertainment and gaming: Gaming enthusiasts can fully immerse themselves in virtual worlds, creating entirely new interactive experiences.
As we continue our exploration of the digital frontier, the next section will dive deeper into the realm of virtual reality headsets—devices that serve as gateways into immersive simulated environments. These devices have undergone remarkable advancements over the years, offering increasingly realistic experiences that captivate users across various domains.
Exploring the World of Virtual Reality Headsets
Virtual Reality (VR) technology has not only transformed the animation industry, but it has also revolutionized how animators create lifelike characters and immersive worlds. By incorporating motion capture techniques into their workflow, animators are able to bring a heightened sense of realism to their creations. For instance, consider the case study of an animated film where motion capture was utilized to accurately depict the movements of a character based on a real-life actor. This allowed for seamless integration between live-action performances and computer-generated imagery.
The impact of motion capture in animation extends beyond just creating realistic movement. It enables animators to more effectively convey emotion through body language and facial expressions. With motion capture data as a reference, they can precisely replicate these subtle nuances that make characters relatable and believable. This level of detail adds depth to storytelling by allowing audiences to emotionally connect with the virtual world presented on screen.
To illustrate further the significance of motion capture in VR animation, here is a bullet point list highlighting its benefits:
- Enhanced realism: Motion capture brings authentic human movement into virtual environments.
- Time-saving efficiency: Animators can skip laborious frame-by-frame animations by using pre-recorded motions.
- Increased versatility: Different styles of performance can be captured and applied to various characters or scenarios.
- Streamlined collaboration: Multiple actors can simultaneously contribute their performances, which facilitates teamwork and enhances creativity.
In addition to utilizing motion capture technology, advancements in VR headsets have played a crucial role in shaping the future of digital animation. These devices provide users with an immersive experience by transporting them into virtual worlds that were previously unimaginable. Within these environments, viewers can interact with characters and objects in ways that blur the lines between reality and fiction.
As we delve deeper into this new era of virtual reality animation, it becomes evident that there is still much untapped potential waiting to be explored. The combination of motion capture and VR headsets opens up a whole new realm of creative possibilities, allowing animators to push the boundaries of storytelling and create truly unforgettable experiences.
Enhancing Reality with Augmented Reality Devices
From the exploration of virtual reality headsets, we now delve into another exciting realm: augmented reality devices. Unlike virtual reality, which immerses users in entirely digital environments, augmented reality enhances their real-world experiences by overlaying digital elements onto the physical world. This technology opens up endless possibilities for various industries and applications.
One intriguing example of augmented reality is its use in education. Imagine a biology class where students can observe three-dimensional models of organisms right on their desks, dissecting them virtually without any mess or concerns about limited resources. This immersive experience not only engages students but also facilitates better understanding and retention of complex concepts.
Augmented reality devices offer numerous advantages across diverse fields:
- Entertainment industry: Augmented reality has revolutionized gaming by bringing characters and objects from games into the players’ real environment. This fusion creates an unparalleled level of interaction and realism, making gaming experiences more captivating than ever before.
- Architecture and design: Architects and interior designers can utilize augmented reality to visualize their creations in real-time at scale. By superimposing 3D models onto existing spaces, professionals and clients gain a comprehensive understanding of how designs will appear once implemented.
- Healthcare: Surgeons can benefit from augmented reality during intricate procedures by having vital data projected directly onto patients’ bodies. This enables accurate navigation through complex anatomical structures while minimizing potential risks.
- Retail industry: Augmented reality provides customers with personalized shopping experiences by allowing them to try on clothes virtually or visualize furniture placement within their homes before making purchasing decisions.
The table below summarizes some key aspects comparing virtual reality (VR) and augmented reality (AR):
| Aspects | Virtual Reality (VR) | Augmented Reality (AR) |
|---------|----------------------|------------------------|
| Environment | Fully digital | Real-world overlaid with digital |
| Immersion | Complete immersion | Enhanced immersion |
| Mobility | Limited mobility | Mobile and adaptable |
| Applications | Gaming, simulation, training | Education, entertainment, healthcare |
As we witness the growing impact of augmented reality in various industries, it is clear that this technology has immense potential for shaping our future. In the subsequent section about “The Future of Gaming: VR Gaming,” we will explore how virtual reality gaming continues to evolve and captivate audiences with its immersive experiences and limitless possibilities.
The Future of Gaming: VR Gaming
Enhancing Reality with Augmented Reality Devices
In recent years, the advent of augmented reality (AR) devices has taken us one step closer to blurring the lines between the physical and digital worlds. One notable example is Microsoft’s HoloLens, a cutting-edge AR headset that overlays computer-generated images onto the real environment. Imagine wearing this device while wandering through an art gallery – suddenly, paintings come alive with animated details or informative captions appear beside each masterpiece.
Augmented reality devices offer various benefits and possibilities for enhancing our everyday experiences. Let us explore some key advantages:
- Enhanced Education: With AR, students can engage in immersive learning experiences, visualizing complex concepts in three-dimensional space. Imagine biology courses where students dissect virtual organisms or historical tours where ancient landmarks are reconstructed before their eyes.
- Improved Navigation: AR-based navigation systems provide real-time guidance by overlaying directions directly onto the user’s field of view. This technology can be especially useful when exploring unfamiliar cities or navigating intricate building layouts.
- Enhanced Retail Experience: Retailers can leverage AR to enhance customer experience by allowing them to visualize products at home before making a purchase. For instance, imagine trying out different furniture placements in your living room without actually moving any heavy items.
- Safer Industrial Training: Industries like aviation and manufacturing can use AR devices for training purposes, allowing employees to practice hands-on tasks in simulated environments without risking accidents or equipment damage.
To further illustrate how Augmented Reality Devices have been integrated into various fields, consider the following table:
|Field||Example of AR Integration|
|Healthcare||Surgeons using AR during complicated surgeries|
|Architecture||Visualizing building designs in real-world settings|
|Entertainment||Augmenting live performances with interactive elements|
|Tourism||Providing interactive information about landmarks|
As we continue on our exploration of computers’ role in virtual reality, we now turn our attention to the future of gaming. By combining powerful hardware and immersive technologies, Virtual Reality (VR) has opened up new possibilities for gamers worldwide.
Unleashing the Power of Computers in Virtual Reality
As virtual reality (VR) continues to evolve, computers play a pivotal role in creating immersive digital experiences. The combination of advanced hardware and sophisticated software enables users to enter realistic and interactive virtual worlds. One fascinating example is the use of VR simulations in medical training, where aspiring surgeons can practice complex procedures without risking patient safety.
To truly understand the impact of computers on virtual reality, it is essential to explore their key contributions. First and foremost, powerful processors enable real-time rendering of high-resolution graphics, ensuring visually stunning environments within VR applications. Additionally, computer algorithms facilitate accurate tracking of user movements, allowing for seamless interaction with the virtual surroundings.
The potential benefits brought about by this convergence between computers and VR are vast. Consider the following emotional responses that individuals may experience when engaging with computer-driven Virtual Reality:
- Sense of Wonder: Users are captivated by breathtaking landscapes and fantastical realms created using cutting-edge computer technology.
- Empathy: Immersive storytelling through VR allows users to step into someone else’s shoes, fostering empathy towards different perspectives or marginalized communities.
- Adrenaline Rush: Action-packed games set in thrilling virtual worlds offer an intense adrenaline rush as players navigate dangerous situations.
- Curiosity: Virtual reality opens up new avenues for exploration and discovery, satisfying individuals’ innate sense of curiosity.
|Key Contribution||Description|
|Realistic Graphics||High-fidelity visual representations create lifelike settings within VR environments.|
|Accurate Body Tracking||Precise motion capture technology accurately translates physical movements into the virtual space.|
|Natural User Interfaces||Intuitive control schemes enhance immersion by mimicking real-world interactions.|
|Seamless Multiplayer Experiences||Networked computers allow users to connect and interact with others in shared virtual spaces.|
As we delve deeper into the digital frontier, it becomes evident that computers are instrumental in unlocking the full potential of virtual reality experiences. The marriage between advanced hardware capabilities and innovative software applications continues to push boundaries, making VR more accessible and engaging for a wide range of users. In our next section, we will explore how immersive virtual experiences can break barriers and transcend traditional limitations.
Breaking Barriers with Immersive Virtual Experiences
The potential of computers to transform our reality through virtual experiences is becoming increasingly evident. As we delve deeper into the digital frontier, new possibilities emerge for harnessing the power of technology to create immersive virtual environments that captivate and engage users.
Imagine a scenario where medical students can simulate complex surgical procedures in a realistic virtual environment before ever stepping foot inside an operating room. This hypothetical case study exemplifies how computers have revolutionized training methods by providing a safe space for learners to practice without any real-world consequences. Through advanced computer simulations, individuals can acquire practical skills and knowledge that were once only attainable through hands-on experience.
The impact of computers in virtual reality extends beyond education and training, permeating various aspects of our lives. Here are some key ways in which this technology enhances human experiences:
- Entertainment: Virtual reality offers unprecedented levels of immersion, allowing users to escape into fantastical worlds or relive historical events with unparalleled realism.
- Therapy: Virtual reality therapy has shown promising results in treating phobias, post-traumatic stress disorder (PTSD), and other mental health conditions by creating controlled environments for therapeutic interventions.
- Architecture and Design: With virtual reality tools, architects and designers can now visualize their creations at scale, enabling them to make informed decisions about spatial arrangements and aesthetics.
- Social Interaction: Virtual reality opens up avenues for social interaction beyond physical boundaries, connecting people across distances as if they were present in the same place.
To further understand the scope of computer-generated virtual realities, let us consider the following table:
|Application||Description||Emotional Response|
|Gaming||Provides thrilling and immersive gaming experiences||Excitement|
|Medical Training||Offers a risk-free environment for practicing complex procedures||Confidence|
|Tourism||Allows users to explore destinations remotely, offering a taste of different cultures||Wanderlust|
|Rehabilitation||Facilitates motor and cognitive rehabilitation through interactive simulations||Hope|
In conclusion, computers have unlocked a world of possibilities in virtual reality. From revolutionizing education and training to enhancing entertainment experiences, the potential applications are vast. As we continue to push the boundaries of technology, our ability to create lifelike environments with 3D modeling comes into focus.
Transitioning seamlessly into the subsequent section about “Creating Lifelike Environments with 3D Modeling,” let us now delve into the realm of constructing virtual realities that mirror our physical surroundings.
Creating Lifelike Environments with 3D Modeling
Virtual reality (VR) technology has revolutionized the way we interact with digital environments. By providing users with a fully immersive experience, VR opens up endless possibilities for entertainment, education, and even therapy. One notable example of how VR is breaking barriers can be seen in the field of mental health.
Imagine a scenario where individuals suffering from anxiety disorders are able to confront their fears in a safe virtual environment. Through exposure therapy using VR, patients can gradually face challenging situations at their own pace, helping them overcome phobias and anxieties effectively. This innovative approach has shown promising results, allowing individuals to regain control over their lives.
To better understand the impact of immersive virtual experiences like VR on various fields, let’s explore some key aspects:
- Enhanced Learning: Traditional teaching methods often struggle to engage students fully. However, by incorporating interactive elements into educational content through VR, educators can create more engaging and memorable learning experiences. Students can dive into historical events or explore scientific concepts firsthand, making education not only informative but also exciting.
- Improved Rehabilitation: Physical rehabilitation after an injury or surgery can be both physically and mentally demanding for patients. With the help of VR technology, therapists can provide stimulating activities that make the recovery process more enjoyable and motivating. Patients undergoing physical therapy sessions can now practice movements in simulated scenarios tailored to their needs.
- Expanded Entertainment Possibilities: From gaming to cinematic experiences, VR offers unparalleled immersion that transports users into new worlds and narratives. Whether it’s exploring fantastical landscapes or participating in adrenaline-pumping adventures, VR provides an escape from reality like never before.
- Empathy Building: Through empathetic storytelling techniques enabled by VR, viewers have the opportunity to step into someone else’s shoes and gain valuable insights about different perspectives and life experiences. This unique form of storytelling fosters empathy and understanding among diverse communities.
|Enhanced Learning||Improved Rehabilitation||Expanded Entertainment Possibilities|
|Immersive educational experiences||Motivating therapy activities||Gaming and virtual adventures|
|Firsthand exploration of concepts||Customized rehabilitation scenarios||Escape from reality|
|Engaging teaching methods||Enhanced motivation for recovery||Cinematic storytelling in a new dimension|
|Memorable learning experiences||Interactive physical therapy sessions||Unique narrative immersion|
By breaking barriers and introducing immersive virtual experiences, VR has the potential to revolutionize several industries. In the following section, we will delve into how 3D modeling techniques contribute to creating lifelike environments within these digital realms.
Transitioning seamlessly, let us now explore the realm of capturing realistic movements with motion capture technology.
Capturing Realistic Movements with Motion Capture
Imagine stepping into a digital world where you can interact with characters, explore new environments, and immerse yourself in experiences that were once only possible in your wildest dreams. This is the power of virtual reality (VR), a technology that has revolutionized the way we perceive and engage with computer-generated worlds. Through the use of VR headsets, users can now transport themselves to alternate realities and truly become part of the digital frontier.
One real-life example of how VR headsets have transformed industries is seen in healthcare. Medical professionals are utilizing this technology to enhance training programs for surgeons, allowing them to practice complex procedures in a realistic virtual environment before performing them on actual patients. By simulating surgical scenarios, trainees can develop their skills without putting lives at risk. This not only improves patient safety but also reduces costs associated with traditional training methods.
The adoption of VR headsets extends beyond medical training; it has made an impact across various fields as well. Here are some ways these devices are blurring the lines between real and virtual:
- Gaming: Gamers can now step into their favorite video game worlds and experience them firsthand through immersive gameplay.
- Education: Students can explore historical events or travel to distant planets all from within the classroom, fostering engagement and deep understanding.
- Tourism: With VR headsets, individuals can virtually visit destinations they may never have the opportunity to see in person due to financial limitations or physical constraints.
- Entertainment: Moviegoers can be transported directly into films, becoming active participants rather than passive observers.
|Aspect||Benefit||Drawback||Impact|
|Enhanced||Immersive experiences||Expensive equipment||Revolutionizing gaming industry|
|Learning||Interactive education||Limited content options||Shaping future educational practices|
|Accessible||Virtual travel opportunities||Potential motion sickness||Redefining how we experience tourism|
|Entertainment||Active participation||Isolation from reality||Transforming the film industry|
As VR technology continues to advance, it is clear that these headsets are not just novelties but powerful tools with significant potential in various sectors. From training surgeons to taking students on virtual field trips, VR headsets have opened up a world of possibilities.
With the foundation laid for understanding the power of VR headsets, let us now delve into the realm of motion capture and its role in creating lifelike experiences within digital environments.
Blurring the Lines Between Real and Virtual with VR Headsets
As technology continues to advance, Virtual Reality (VR) headsets have emerged as a groundbreaking tool that allows users to immerse themselves in digital environments. These devices offer an unprecedented level of immersion, enabling individuals to interact with virtual worlds in ways previously unimaginable. One fascinating example is the case study of Sarah, a young woman who has been using a VR headset for therapeutic purposes.
Sarah suffered from social anxiety and found it challenging to engage in real-life social situations. However, through the use of a VR headset, she was able to gradually expose herself to simulated social scenarios while still feeling safe and comfortable within her own home. This exposure therapy helped desensitize Sarah to her fears and significantly reduced her anxiety levels over time.
The integration of VR headsets into various fields has led to numerous benefits and possibilities:
- Education: Students can explore historical events or scientific concepts by virtually visiting different time periods or conducting experiments.
- Entertainment: Users can fully immerse themselves in video games or movies, enhancing their overall entertainment experience.
- Healthcare: Medical professionals are utilizing VR simulations for surgical training, pain management techniques, and treating phobias or post-traumatic stress disorder (PTSD).
- Architecture and Design: Architects and designers can create virtual models before constructing physical structures, allowing for more efficient planning processes.
|Advantages of VR Headsets||Disadvantages of VR Headsets||Ethical Considerations||Future Possibilities|
|Enhanced immersive experiences||Potential motion sickness||Privacy concerns||Integration with artificial intelligence|
|Accessible from anywhere||High cost||Addiction risks||Collaboration across multiple platforms|
|Wide range of applications||Limited physical activity||Safety precautions||Haptic feedback advancements|
In conclusion, VR headsets have revolutionized the way we engage with digital content, blurring the lines between real and virtual environments. With their ability to transport users into immersive worlds, these devices offer countless opportunities for education, entertainment, healthcare, and design. As technology continues to evolve, it is likely that we will witness even more advancements in this field. The next section will explore how augmented reality (AR) further expands the possibilities of merging virtual elements with our physical world.
Expanding Possibilities with Augmented Reality
The rapid advancement of virtual reality (VR) technology has paved the way for a new era where digital experiences are becoming increasingly immersive. One captivating example is the application of VR in medical training. Imagine a future where surgeons can simulate complex procedures before operating on real patients, minimizing risks and enhancing their skills. This hypothetical scenario underscores the potential of VR to revolutionize various industries by blurring the lines between what is real and what is virtual.
When exploring this emerging technology, it is crucial to understand its underlying mechanisms that enable such realistic experiences. VR headsets serve as one of the key components in creating an immersive environment. By combining high-resolution displays, motion sensors, and advanced optics, these devices transport users into digitally-rendered worlds that mimic reality. The user’s movements are tracked in real-time, allowing them to interact with objects and navigate through virtual spaces naturally.
The adoption of VR headsets extends beyond professional applications; it also opens up a realm of possibilities for entertainment purposes. Gaming enthusiasts now have access to a whole new level of immersion, as they step into virtual realms populated by visually stunning landscapes and lifelike characters. In fact, gaming has been at the forefront of driving innovation in VR technology due to its ability to captivate audiences and provide thrilling experiences like never before.
To fully appreciate the impact of VR headsets, consider these emotional responses:
- A sense of wonderment: Users often experience awe when stepping into realistic virtual environments.
- Heightened excitement: Gamers feel an adrenaline rush as they engage in intense battles or challenging puzzles within virtual games.
- Empathy towards fictional characters: Immersive storytelling allows players to emotionally connect with in-game personas.
- Overcoming fear: Through exposure therapy simulations, individuals can confront phobias in controlled virtual settings.
|Emotional Response||Description|
|Sense of wonderment||Users feel a profound sense of awe and amazement.|
|Heightened excitement||Gamers experience an increased level of thrill and exhilaration.|
|Empathy towards characters||Players forge emotional connections with fictional personas.|
|Overcoming fear||Individuals confront and conquer their fears in virtual scenarios.|
As technology continues to advance, VR headsets are becoming more accessible to the general public, allowing individuals from all walks of life to enter the digital frontier. With its potential applications ranging from medical training to immersive gaming experiences, it is evident that VR has already begun reshaping various industries. In the subsequent section on “The Rise of Virtual Reality in Gaming,” we will explore how this technology has revolutionized the way games are played, captivating audiences worldwide.
The Rise of Virtual Reality in Gaming
As technology continues to advance, the realm of virtual reality (VR) has expanded beyond gaming into various other industries. One such industry that has embraced VR is healthcare. For example, imagine a scenario where a surgeon needs to perform a complex and delicate procedure. Through the use of augmented reality (AR), the surgeon can wear a head-mounted display that overlays digital images onto their real-world view. This allows them to see crucial information in real-time, such as patient vitals or anatomical structures, enhancing their accuracy and efficiency during surgery.
The integration of AR into healthcare holds great potential for improving patient outcomes and revolutionizing medical training. Here are some notable applications:
- Surgical Guidance: AR can provide surgeons with precise guidance during procedures by overlaying 3D models or visual cues onto patients’ bodies, enabling more accurate incisions and reducing surgical errors.
- Rehabilitation: By combining motion tracking sensors with AR technology, therapists can create interactive rehabilitation exercises for patients recovering from injuries or surgeries. These exercises not only enhance engagement but also promote faster recovery.
- Medical Education: AR enables students and trainees to visualize complex medical concepts through immersive experiences. For instance, anatomy classes can be enhanced with interactive holograms that allow students to explore three-dimensional representations of organs and systems.
- Mental Health Treatment: Virtual environments created through AR can simulate scenarios that help individuals confront and overcome phobias or manage anxiety disorders under controlled conditions.
Table: Applications of Augmented Reality in Healthcare
|Application||Description|
|Surgical Guidance||Overlaying 3D models on patients’ bodies for precise surgical navigation|
|Rehabilitation||Interactive exercises using motion-tracking sensors|
|Medical Education||Immersive learning experiences with holographic representations|
|Mental Health||Simulated scenarios aiding in confronting phobias or managing anxiety|
In conclusion, the integration of augmented reality into healthcare has opened up new possibilities for improving patient care and medical education. By providing real-time information and enhancing visualization, AR technology can assist surgeons during complex procedures, aid in rehabilitation exercises, facilitate immersive learning experiences, and even contribute to mental health treatment. As this field continues to evolve, it is essential to explore further applications and harness the full potential of these technological advancements.
Note: The next section titled “The Rise of Virtual Reality in Gaming” will explore the impact of virtual reality specifically within the gaming industry.
|
What Is a Check?
In simple words, a check is a written, dated, and signed instrument that instructs a bank to pay a specific
amount to the bearer. A payor is the person or entity that writes the check, while the payee is the
person to whom the check is addressed. At the same time, the drawee is the bank on which the check is
drawn. This bank is also called the issuing bank.
Checks can either be cashed or deposited. The funds are taken from the payor's bank account when the payee presents a check for negotiation. By using this method, the payor can send instructions to the bank to transfer funds from his or her account to that of the payee.
A check is typically written against a checking account, but it can also be used to negotiate funds from a savings or other account.
In a few parts of the world, such as England and Canada, the spelling used is “cheque.”
How Checks Work?
Checks are bills of exchange, or documents that guarantee a certain amount of money. The drawee bank prints them for its account holders, the payors, to use. Upon writing the check, the payor presents it to the payee, who takes it to his or her bank or other financial institution to get cash or to deposit into a bank account.
Through the use of checks, monetary transactions can be conducted between two or more parties without the need to exchange cash. The amount on the check is a substitute for the same amount of physical currency.
Checks can be used to make bill payments, as gifts, or to transfer sums between two people or entities. Money transfers using these methods are generally considered safer than those using cash, especially when large amounts are involved. It is difficult for a third party to cash a lost or stolen check, because only the payee can negotiate the check. Internet banking, UPI payments, debit and credit cards, and wire transfers are modern substitutes for checks.
What are the features of checks?
The key components of checks are generally the same, even though they do not all look alike. The left-hand side of the check contains the name and contact information of the person who wrote it. There is also a line on the check with the name of the financial institution holding the drawer's account. There are a number of lines that need to be filled in by the payor:
- The date on which the check is issued is written on the line on the top right-hand corner of the check.
- The check must have the payee's name on the first line in the center. This is indicated by the phrase “Pay to the Order Of.”
- The amount of the check is filled out in the box next to the payee’s name.
- The amount written out in words goes on the line just below the payee’s name.
- The payor signs the check on the line which is on the bottom right hand corner of the check. Valid checks must have signatures.
- Also, on the bottom left of the check, you can find a memo line under the information of the drawing bank. In addition to filling in the check number, account number, or any other necessary information, the payor can include other information as well.
The payor's signature line and the memo line are separated by a series of coded numbers on the bottom edge of the check. They are the routing number of the bank, the account number of the payor, and the check number. Certain countries, such as Canada, replace routing numbers with institution numbers (which represent the bank's identifying code) and branch numbers indicating where the account is held.
An endorsement line appears on the back of the check, which needs to be signed by the payee when the check is negotiated. At the time of the negotiation, the receiving bank stamps the back with a deposit stamp, and then it goes for clearing. Checks are stamped once again and filed once they reach the drawing bank. It is possible that the payor may request that the check be returned.
Types of Checks:
Certified check:
One of the most commonly used checks is the certified check. This check verifies that the drawer’s account has enough funds to honor the amount of the check. In other words, the check is guaranteed not to bounce. To certify a check, it must be presented at the bank on which it is drawn, at which time the bank will ascertain its authenticity with the payor.
Cashier's check:
By guaranteeing and signing the cashier's check, a banking institution assures that the funds are secure. Checks of this type are often required when purchasing a car or house.
Payroll check:
Another example is a payroll check, or paycheck, which an employer issues to compensate an employee for their work. In recent years, physical paychecks have given way to direct deposit systems and other forms of electronic transfer.
Bounced Checks:
It is not possible to negotiate a check written for a sum greater than what is in a person's checking account. If this happens, the check is returned. This is referred to as a “bounced check.” The check bounces because it cannot be processed, as there are insufficient or non-sufficient funds (NSF) in the account (the two terms are interchangeable). The payor is usually charged a penalty fee for a bounced check. Sometimes, a fee may also be charged to the payee.
Why is a check important?
Checks are a useful financial product for personal and business use, with their own unique features. Checks also provide a level of security in your transactions that cash does not.
How safe is a check?
Writing checks is simple and safe, as long as you get the basics right. Make sure you write the name of the person or organisation you are paying, and draw a line through any blank spaces on the check so people cannot add extra numbers or names.
Can checks be stolen?
Cheque fraud can happen in a few different ways. Criminals can steal cheques, create fraudulent cheques, or change the name or amount of a legitimate cheque. While this is not very common, it always helps to remain cautious.
|
Grade Level: Middle school
Time Required: 2 hours (wild guess!)
Subject Areas: Physical Science, Physics
Maker Challenge Recap
How does mass affect momentum in a head-on collision? Students explore this question and experience the open-ended engineering design process as if they are the next-generation engineers working on the next big safety feature for passenger vehicles. They are challenged to design or improve an existing passenger compartment design/feature so that it better withstands front-end collisions, protecting riders from injury and resulting in minimal vehicle structural damage. With a raw egg as the test passenger, teams use teacher-provided building materials to add their own safety features onto either a small-size wooden car kit or their own model cars created from scratch. They run the prototypes down ramps into walls, collecting distance and time data, slo-mo video of their crash tests, and damage observations. They make calculations and look for relationships between car mass, speed, momentum and the amount of crash damage. A guiding worksheet and pre/post-quiz are included.
Maker Materials & Supplies
- a few raw eggs (expect some to break during crash tests)
- model car base kits; such as a wooden car kit for $9 at https://www.teachersource.com/product/wooden-car-kit/energy, the engineering sail car class pack (enough for 30 students) for $48 at https://www.pitsco.com/Try-This-Engineering-Class-Pack, or individually for $2.25 at https://www.pitsco.com/try-this-engineering-kit; alternatively, provide assorted craft supplies from which students construct their own basic model car bases
- assorted building materials, such as cardboard, wooden craft sticks, tag board, foam sheets, felt sheets, cotton or polyester fill, chenille stems/pipe cleaners, plastic drinking straws, string/yarn, rubber bands, balloons
- assorted tools and adhesives, such as rulers, scissors, tape, white glue, hot glue
- wooden board for a ramp, 10-inches wide x 3-5-feet long, to run all model cars down for crash testing; alternatively, use sturdy cardboard
- plastic sheeting, to tape against the wall and floor for mess protection during testing
- washers and duct tape, to add equal weight to all cars, to improve the crash dynamics
- digital scale, to measure car mass
- smartphone or tablet, to video record the model cars in slow motion
- stopwatches, for timing crash tests
- (optional) Internet access for researching current car safety features
Worksheets and Attachments
Visit [ ] to print or download.
More Curriculum Like This
Students further their understanding of the engineering design process (EDP) while applying researched information on transportation technology, materials science and bioengineering. Students are given a fictional client statement (engineering challenge) and directed to follow the steps of the EDP t...
Students also investigate the psychological phenomenon of momentum; they see how the "big mo" of the bandwagon effect contributes to the development of fads and manias, and how modern technology and mass media accelerate and intensify the effect.
This lesson introduces the concepts of momentum, elastic and inelastic collisions. Many sports and games, such as baseball and ping-pong, illustrate the ideas of momentum and collisions. Students explore these concepts by bouncing assorted balls on different surfaces and calculating the momentum for...
Students learn how photovoltaics enable us to transform renewable solar energy into electricity both on Earth and in space. Watching a clip from “The Martian” movie shows the importance of PV technologies in space exploration. Two student journaling sheets are provided.
Motor vehicle collisions account for 1.3 million deaths globally each year, while another 20-50 million people are injured or disabled in vehicle accidents. More than half of the collisions are caused by people who are 15-40 years old. Engineers are constantly innovating and designing ways to enhance passenger safety. Engineers are guided by the steps of the engineering design process to improve all aspects of vehicle design, including materials durability, biodegradability, strength and rigidity, etc. Take a look at this video.
(Show students the 1:35-minute “Buckle Up PSA” YouTube video, which shows many slow-motion vehicle crash tests with dummies inside, provides accident statistics, and briefly recaps how vehicle safety features (car seats, boosters and seat belts) added over the years have saved lives; at https://goo.gl/JyJ6ws.)
(Then introduce the design challenge—a hypothetical scenario in which students are the next-generation engineers challenged to come up with the next best design for a vehicle passenger compartment and/or safety features to keep the “passenger egg” safe during a front-end collision, with minimal vehicle damage.)
- About a week before starting the maker challenge, consider having students take the Speed & Momentum Pre/Post-Quiz. Review their answers to give you input for adjusting the challenge as needed. At activity end, administer the same quiz to assess learning gains.
- Have students use the Design Process Packet to guide them through the activity. This seven-page worksheet provides a place to record their research, ideas, sketches, plans, materials, data, calculations, analysis and conclusions.
- As necessary, with the class, briefly review the steps of the engineering design process and relevant vocabulary words—collision, momentum, speed, mass.
Research, Brainstorm, Plan and Prototype
- As a class, compile a list of safety features found on current vehicles. If students get stumped, suggest a few examples to get the ideas flowing, such as roll cages, seat belts, booster seats, airbags, head rests or cushioned interiors, which students can expand on and improve.
- Introduce the project constraints: To come up with a vehicle safety design feature for their small-size model cars that is not currently in use, or one that works better than those currently in use. A successful design is one that protects the “passenger egg” and is durable.
- Prompt students to use this information as a jumping off point to examine the provided building materials and note similarities, such as craft sticks being similar to metal framing bars, and foam/felt being similar to the insulation and padding incorporated into modern vehicles.
- If students work in pairs or groups (best if no more than four per group), have them individually brainstorm and sketch a few ideas for what they want to build on top of the model car base, with the goal of updating or coming up with a new passenger compartment so the passenger survives a front-end crash with no injuries and minimal car damage.
- Teams decide on a design solution to prototype and test.
- Give teams plenty of time to build and do small rolling tests against a wall so they can see how the impact affects the materials.
- Teams document the materials they use and sketch their final designs.
- After all teams have created prototypes, use the digital scale to find the mass of each car. Having this information helps with the anecdotal conclusions about the mass and momentum of the cars.
Test and Analyze
- Students use stopwatches to time their crash test runs, from the moment of release to the moment of impact. Measure the ramp from the point of release to the wall, in meters. Record distance and time measurements. Tips: Using a smartphone or tablet to video record the testing runs in slow motion is helpful for both timing and analysis. For more reliable data, have students run multiple trials (3-5) and average their findings. Repeated runs will, however, take a toll on the car, which also serves as a good indicator of durability.
- After testing, students record whether their test runs were successful—egg did not crack and car structure remained intact—or not.
- Students calculate how fast their cars went (speed = distance ÷ time) and the momentum of each vehicle (momentum = mass × speed), as in the sketch after this list. They examine the data for relationships.
- As a class, briefly discuss any relationships seen between the heavier cars and the amount of destruction from the crash and the calculated momentum. Consider graphing the data. Have students write down their conclusions based on the class data—how mass affects momentum and the consequences of a front-end collision.
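To make the calculation step concrete, here is a minimal Python sketch of the speed and momentum arithmetic. The team names, masses, ramp length and trial times are made-up sample values, not data from this activity.
```python
# Minimal sketch: average speed and momentum for each crash-test car.
ramp_length_m = 1.5  # distance from release point to wall, in meters

# team name -> (car mass in kilograms, list of trial times in seconds)
trials = {
    "Team A": (0.35, [0.82, 0.85, 0.80]),
    "Team B": (0.50, [0.90, 0.88, 0.91]),
}

for team, (mass_kg, times_s) in trials.items():
    avg_time = sum(times_s) / len(times_s)
    speed = ramp_length_m / avg_time   # speed = distance / time, in m/s
    momentum = mass_kg * speed         # momentum = mass * speed, in kg*m/s
    print(f"{team}: speed = {speed:.2f} m/s, momentum = {momentum:.3f} kg*m/s")
```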
Redesign and Rebuild
- Teams brainstorm ways to fix the issues they found with their passenger compartment designs. For example, they might decide to beef up the roll bars, add more padding or create areas intended to compact without affecting the egg.
- After students/teams finish their redesigns/rebuilds, they measure the mass of their redesigned cars and add it to the individual or class chart.
Test and Analyze
- Students re-measure the ramp length and test their redesigned prototypes. Again, they use stopwatches to measure the time from vehicle release to impact.
- Students add their new results data to the chart. As a class, discuss the overall relationships of mass, momentum and observational data about the egg and crash. If you asked students to graph their data, look at the graphs to visually compare mass and momentum.
- Students write down their revised/final conclusions about what they think the data shows.
How does mass affect momentum in a head-on collision? As needed, guide student thinking about the relationship between greater mass, greater momentum and resulting damage to the passenger compartment and possible egg injuries. Did the mass, momentum or speed consistently help to predict the outcome of injury to the passenger egg or the vehicle damage? Expect students to see some correlation between mass and the amount of damage done; the greater the mass, the greater the damage.
How is the engineering design process used in real life? As an example, car buyers who are parents might want more safety features than single people, but those features cost more, so engineers create a range of designs that match the desires of different buyers. All products—including sneakers, airplanes, computers, phones, video games—are the result of engineering design. Engineers follow the same guiding steps and problem-solving techniques in order to invent new designs and improve existing designs for products, structures and systems that help to improve our lives. People do not usually think about following the design process steps beyond solving engineering challenges—but that is what most of us regularly do when coming up with solutions! We research to find information and figure out possible solutions, then we move forward with the best solution, testing and improving it as we go.
If the wooden base car models are too light to result in much of a crash, add the same number of metal washers to the bottoms of all the cars; this additional mass results in better crashes without changing the relative comparisons.
Pay attention to the ramp angle in relationship to the wheel size and adjust the ramp angle as necessary so the cars don’t bottom out during the crash test runs.
- For lower grades, build based on what students know about vehicles and then redesign based on the problems that arise. Do the speed and momentum calculations as a class.
- For higher grades, do not provide model base cars. Instead, have teams design and build entire model cars from supplied materials.
- Have students graph their individual/class data to visually compare mass with momentum.
- Add a materials costs column to the DPP table so students can figure the cost of materials and compare overall cost efficiency of the vehicles.
- Compare inelastic vs. elastic collisions by having students run cars down opposing ramps into one another.
- Study the change of force and acceleration by pushing the cars with different forces and/or sending cars down a longer ramp.
Copyright© 2017 by Regents of the University of Colorado; original © 2016 North Dakota State University
ContributorsBeth Patterson; Kulm School; Jace Duffield, NDSU
Supporting ProgramRET Program, College of Engineering, North Dakota State University Fargo
This curriculum was developed in the College of Engineering’s Research Experience for Teachers: Engineering in Precision Agriculture for Rural STEM Educators program supported by the National Science Foundation under grant no. EEC 1542370. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Last modified: February 12, 2020
|
In order to better prepare for CBSE Class 9 exams, Ribblu.com brings to you all the sample papers, revision notes and papers, previous years' papers and worksheets of Mathematics for CBSE Class 9. These papers and worksheets are from various CBSE schools across India and have been contributed and shared by various users of Ribblu.
The idea behind these CBSE sample papers and worksheets is that they can be instrumental in helping students achieve maximum marks in their exams. They instil confidence in students and help them get ready to face their school examinations. These papers and worksheets, organised school-wise, cover important concepts from an examination perspective. Students and parents can download all the available papers and worksheets directly in the form of PDF. One can use these papers and worksheets to get extensive practice and familiarise themselves with the format of the question paper.
CBSE Class 9 Sample Papers & Sample Questions
CBSE Class 9 Revision Papers – Chapter Wise
CBSE Class 9 Maths Worksheets
NCERT Solutions For Class 9 Maths
Class 9 Maths Question Papers
- Class 9 Mathematics practice question paper 2 with solution
Class 9 Maths Revision Notes With Important Question Answers
CHAPTER 1 : REAL NUMBERS
Review of representation of natural numbers, integers and rational numbers on the number line. Representation of terminating / non terminating recurring decimals on the number line through successive magnification. Rational numbers as recurring/ terminating decimals. Operations on real numbers. Examples of non-recurring/non-terminating decimals. Existence of non-rational numbers (irrational numbers) such as √2, √3 and their representation on the number line. Explaining that every real number is represented by a unique point on the number line and conversely, viz. every point on the number line represents a unique real number. Existence of √x for a given positive real number x and its representation on the number line with geometric proof. Definition of nth root of a real number.
Recall of laws of exponents with integral powers. Rational exponents with positive real bases (to be done by particular cases, allowing learner to arrive at the general laws.)
CHAPTER 2 : POLYNOMIALS
Definition of a polynomial in one variable with examples and counter examples. Coefficients of a polynomial, terms of a polynomial and zeroes of polynomial. Degree of a polynomial. Constant, linear, quadratic and cubic polynomials. Monomials, binomials, trinomials. Factors and multiples. Zeros of a polynomial. Motivate and State the Remainder Theorem with examples. Statement and proof of the Factor Theorem. Factorization of ax² + bx + c, a ≠ 0 where a, b and c are real numbers, and of cubic polynomials using the Factor Theorem. Recall of algebraic expressions and identities. Verification of identities:
and their use in factorization of polynomials.
CHAPTER 3: COORDINATE GEOMETRY
The Cartesian plane, coordinates of a point, names and terms associated with the coordinate plane, notations, plotting points in the plane.
CHAPTER 4: LINEAR EQUATIONS IN TWO VARIABLES
Recall of linear equations in one variable. Introduction to the equation in two variables. Focus on linear equations of the type ax+by+c=0. Prove that a linear equation in two variables has infinitely many solutions and justify their being written as ordered pairs of real numbers, plotting them and showing that they lie on a line. Graph of linear equations in two variables. Examples, problems from real life, including problems on Ratio and Proportion and with algebraic and graphical solutions being done simultaneously.
CHAPTER 5: INTRODUCTION TO EUCLID’S GEOMETRY
History – Geometry in India and Euclid’s geometry. Euclid’s method of formalizing observed phenomenon into rigorous Mathematics with definitions, common/obvious notions, axioms/postulates and theorems. The five postulates of Euclid. Equivalent versions of the fifth postulate. Showing the relationship between axiom and theorem, for example: (Axiom) 1. Given two distinct points, there exists one and only one line through them. (Theorem) 2. (Prove) Two distinct lines cannot have more than one point in common.
CHAPTER 6: LINES AND ANGLES
1. (Motivate) If a ray stands on a line, then the sum of the two adjacent angles so formed is 180° and the converse.
2. (Prove) If two lines intersect, vertically opposite angles are equal.
3. (Motivate) Results on corresponding angles, alternate angles, interior angles when a transversal intersects two parallel lines.
4. (Motivate) Lines which are parallel to a given line are parallel.
5. (Prove) The sum of the angles of a triangle is 180°.
6. (Motivate) If a side of a triangle is produced, the exterior angle so formed is equal to the sum of the two interior opposite angles.
1. (Motivate) Two triangles are congruent if any two sides and the included angle of one triangle is equal to any two sides and the included angle of the other triangle (SAS Congruence).
2. (Prove) Two triangles are congruent if any two angles and the included side of one triangle is equal to any two angles and the included side of the other triangle (ASA Congruence).
3. (Motivate) Two triangles are congruent if the three sides of one triangle are equal to three sides of the other triangle (SSS Congruence).
4. (Motivate) Two right triangles are congruent if the hypotenuse and a side of one triangle are equal (respectively) to the hypotenuse and a side of the other triangle. (RHS Congruence)
5. (Prove) The angles opposite to equal sides of a triangle are equal.
6. (Motivate) The sides opposite to equal angles of a triangle are equal.
7. (Motivate) Triangle inequalities and relation between ‘angle and facing side’ inequalities in triangles.
CHAPTER 8: QUADRILATERALS
1. (Prove) The diagonal divides a parallelogram into two congruent triangles.
2. (Motivate) In a parallelogram opposite sides are equal, and conversely.
3. (Motivate) In a parallelogram opposite angles are equal, and conversely.
4. (Motivate) A quadrilateral is a parallelogram if a pair of its opposite sides is parallel and equal.
5. (Motivate) In a parallelogram, the diagonals bisect each other and conversely.
6. (Motivate) In a triangle, the line segment joining the mid points of any two sides is parallel to the third side and is half of it and (motivate) its converse.
CHAPTER 9: AREA OF PARALLELOGRAM AND TRIANGLES
Review of concept of Area, Revision of Area of rectangle.
1. (Prove) Parallelograms on the same base and between the same parallels have the same area.
2. (Motivate) Triangles on the same (or equal base) base and between the same parallels are equal in area.
CHAPTER 10: CIRCLES
Through examples, arrive at definition of circle and related concepts radius, circumference, diameter, chord, arc, secant, sector, segment, subtended angle.
1. (Prove) Equal chords of a circle subtend equal angles at the centre and (motivate) its converse.
2. (Motivate) The perpendicular from the centre of a circle to a chord bisects the chord and conversely, the line drawn through the centre of a circle to bisect a chord is perpendicular to the chord.
3. (Motivate) There is one and only one circle passing through three given non-collinear points.
4. (Motivate) Equal chords of a circle (or of congruent circles) are equidistant from the centre (or their respective centres) and conversely.
5. (Prove) The angle subtended by an arc at the centre is double the angle subtended by it at any point on the remaining part of the circle.
6. (Motivate) Angles in the same segment of a circle are equal.
7. (Motivate) If a line segment joining two points subtends equal angle at two other points lying on the same side of the line containing the segment, the four points lie on a circle.
8. (Motivate) The sum of either of the pair of the opposite angles of a cyclic quadrilateral is 180° and its converse.
CHAPTER 11: CONSTRUCTIONS
Construction of bisectors of line segments and angles of measure 60°, 90° , 45° etc., equilateral triangles. Construction of a triangle given its base, sum/difference of the other two sides and one base angle. Construction of a triangle of given perimeter and base angles.
CHAPTER 12: HERON’S FORMULA (AREA)
Area of a triangle using Heron’s formula (without proof) and its application in finding the area of a quadrilateral.
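As a quick illustration of the formula named here, the following is a minimal Python sketch of Heron's formula; the side lengths are arbitrary sample values chosen for illustration.
```python
import math

def heron_area(a, b, c):
    """Area of a triangle with sides a, b, c using Heron's formula."""
    s = (a + b + c) / 2  # semi-perimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

# Sample triangle with sides 3, 4, 5 (a right triangle, so the area should be 6).
print(heron_area(3, 4, 5))  # 6.0
```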
CHAPTER 13: SURFACE AREAS AND VOLUMES
Surface areas and volumes of cubes, cuboids, spheres (including hemispheres) and right circular cylinders/cones.
CHAPTER 14: STATISTICS
Introduction to Statistics: Collection of data, presentation of data — tabular form, ungrouped / grouped, bar graphs, histograms (with varying base lengths), frequency polygons. Mean, median and mode of ungrouped data.
CHAPTER 15: PROBABILITY
History, Repeated experiments and observed frequency approach to probability. Focus is on empirical probability. (A large amount of time to be devoted to group and to individual activities to motivate the concept; the experiments to be drawn from real – life situations, and from examples used in the chapter on statistics)
CLASS 9 MARKING SCHEME ( CBSE 2020 )
|Unit||Unit Name||Marks|
|VI||STATISTICS & PROBABILITY||10|
|
Have you ever noticed how mathematical ideas are often used in patterns that we see all around us? This article describes the life of Escher who was a passionate believer that maths and art can be. . . .
Geometry problems at primary level that require careful consideration.
Geometry problems for primary learners to work on with others.
Make an equilateral triangle by folding paper and use it to make patterns of your own.
Geometry problems at primary level that may require resilience.
How much do you have to turn these dials by in order to unlock the safes?
Have a good look at these images. Can you describe what is happening? There are plenty more images like this on NRICH's Exploring Squares CD.
Can you describe the journey to each of the six places on these maps? How would you turn at each junction?
Geometry problems for inquiring primary learners.
Pythagoras of Samos was a Greek philosopher who lived from about 580 BC to about 500 BC. Find out about the important developments he made in mathematics, astronomy, and the theory of music.
How many times in twelve hours do the hands of a clock form a right angle? Use the interactivity to check your answers.
This investigation explores using different shapes as the hands of the clock. What things occur as the hands move?
Draw some angles inside a rectangle. What do you notice? Can you prove it?
Make a clinometer and use it to help you estimate the heights of tall objects.
This task looks at the different turns involved in different Olympic sports as a way of exploring the mathematics of turns and angles.
Can you draw perpendicular lines without using a protractor? Investigate how this is possible.
Watch this film carefully. Can you find a general rule for explaining when the dot will be this same distance from the horizontal axis?
How did the the rotation robot make these patterns?
An activity for high-attaining learners which involves making a new cylinder from a cardboard tube.
Join pentagons together edge to edge. Will they form a ring?
What shapes should Elly cut out to make a witch's hat? How can she make a taller hat?
Interior angles can help us to work out which polygons will tessellate. Can we use similar ideas to predict which polygons combine to create semi-regular solids?
Can you make a right-angled triangle on this peg-board by joining up three points round the edge?
Semi-regular tessellations combine two or more different regular polygons to fill the plane. Can you find all the semi-regular tessellations?
Explore patterns based on a rhombus. How can you enlarge the pattern - or explode it?
Can you use LOGO to create a systematic reproduction of a basic design? An introduction to variables in a familiar setting.
How good are you at estimating angles?
On a clock the three hands - the second, minute and hour hands - are on the same axis. How often in a 24 hour day will the second hand be parallel to either of the two other hands?
Shogi tiles can form interesting shapes and patterns... I wonder whether they fit together to make a ring?
Which hexagons tessellate?
At the time of writing the hour and minute hands of my clock are at right angles. How long will it be before they are at right angles again?
Can you find triangles on a 9-point circle? Can you work out their angles?
Jennifer Piggott and Charlie Gilderdale describe a free interactive circular geoboard environment that can lead learners to pose mathematical questions.
Can you use LOGO to create this star pattern made from squares. Only basic LOGO knowledge needed.
What is the relationship between the angle at the centre and the angles at the circumference, for angles which stand on the same arc? Can you prove it?
Use your knowledge of angles to work out how many degrees the hour and minute hands of a clock travel through in different amounts of time.
Suggestions for worthwhile mathematical activity on the subject of angle measurement for all pupils.
Where will the point stop after it has turned through 30 000 degrees? I took out my calculator and typed 30 000 ÷ 360. How did this help?
Construct two equilateral triangles on a straight line. There are two lengths that look the same - can you prove it?
A metal puzzle which led to some mathematical questions.
During the third hour after midnight the hands on a clock point in the same direction (so one hand is over the top of the other). At what time, to the nearest second, does this happen?
|
Recall math facts
For each new math fact that we calculate, we get at least one more math fact for free:
Gives us and .
Similarly, we can calculate
This gives us the related fact
In this section, it is useful to recall common exponent facts in order to evaluate roots. Consider completing this Reference Sheet for Exponents. Refer to this sheet rather than a calculator for common values.
Roots and Exponents
Let’s compare what we know about roots and exponents. We know the exponent law that says . We also know that
We can write that:
In other words, and are the same number. You can type them both on your calculator and get the same answer.
Extending the idea, we have
In other words,
And in general:
What about a number that has a power and a root?
Consider a number such as
Note that this can also be written as .
Now, as an exponent, the cube root is the same as the exponent 1/3; the first part reads:
The second part is, ‘raise to the power 4’:
There is an exponent law that reminds us what to do here. So we can write:
When it comes to converting from root notation to exponent notation or vice versa:
The idea of powers and roots has been around for a very long time, but the way we write them down and communicate them has changed over the years. You can see from this table, called ‘the origins of some mathematical symbols‘, that the square root symbol has been around since 1524, and exponents as we now use them showed up about 100 years later, in 1637.
We now use both notations interchangeably, but one can note a preference for root notation in geometry and for exponent notation in calculus.
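A quick numerical check, using arbitrary sample values, shows that root notation and fractional-exponent notation give the same results (up to floating-point rounding):
```python
import math

print(math.sqrt(16), 16 ** 0.5)        # square root of 16 both ways: 4.0 and 4.0
print(27 ** (1 / 3))                   # cube root of 27 written as the exponent 1/3: about 3.0
print(math.isclose(27 ** (1 / 3), 3))  # True
```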
Evaluating fraction exponents
Generally, a fractional exponent leads to an irrational number. The answers to the examples below, however, are integers or rational numbers, and can be evaluated without using a calculator.
What is the second (square) root of 16?
What is the fourth root of 16? For the fourth root you can square root then square root again:
For a general fraction exponent, first we need to know how the power and the root fit together.
This is how we handle it:
We know that a fraction exponent m/n combines a power m with an n-th root.
And we know the exponent law (x^a)^b = x^(ab).
This is why we can rewrite x^(m/n) as (x^(1/n))^m.
In this form, we first evaluate the bracket from recall or the reference sheet, then calculate the power.
Alternatively, convert to root notation then compute: x^(m/n) is the n-th root of x, raised to the power m.
For example, by 125^(2/3) we mean: take the cube root of 125 then square it, giving (∛125)^2 = 5^2 = 25.
Similarly, by 64^(3/2) we mean: take the square root of 64 then cube it, giving (√64)^3 = 8^3 = 512.
The reference sheet shows that . Multiplying by 4 one more time gives .
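A short Python sketch of both routes for a fraction exponent, using 125^(2/3) from the example above:

```python
value = 125

# Root first, then power: cube root of 125 is 5, and 5 squared is 25.
root_then_power = round(value ** (1 / 3)) ** 2

# Power first, then root: 125 squared is 15625, and its cube root is 25.
power_then_root = round((value ** 2) ** (1 / 3))

print(root_then_power, power_then_root)   # 25 25
print(value ** (2 / 3))                   # the fraction exponent directly, approximately 25
```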
Evaluate negative exponents
Division can be expressed using the division sign, by using a fraction, or by using a negative exponent. A negative exponent does not make the value negative, because dividing does not make a value negative.
1 divided by 7 can be written as: 1 ÷ 7, or 1/7, or 7^(-1).
Note that all of these are positive (all are equal to 1/7). Generally the first thing we do with a negative exponent (which means ‘divide’) is to write it as a fraction.
A negative exponent represents repeated division.
Fractions with Negative Exponents
In examples 8, 9 and 10 we take a negative power of a fraction. To understand the process, you need to remember how to divide by a fraction. Here’s a reminder: dividing by a fraction is the same as multiplying by its reciprocal, so 1 ÷ (a/b) = b/a.
Watch what happens to the fraction when there is a negative power: for example, (2/3)^(-1) = 3/2 and (2/3)^(-2) = (3/2)^2 = 9/4.
Notice that our original fraction flips when the exponent is negative. Let’s write that down in general: (a/b)^(-n) = (b/a)^n.
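A small Python check of this flipping rule, using the fraction 2/3 as an arbitrary example:

```python
from fractions import Fraction

f = Fraction(2, 3)

# A negative exponent flips the fraction: (2/3) ** -2 equals (3/2) ** 2.
print(f ** -2)               # 9/4
print(Fraction(3, 2) ** 2)   # 9/4
```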
|
Level 3: Multiplicative Thinking
Show Card 1 and say, “This is a different way of showing what we get when we add up these numbers.” Indicate 3, 2 and 4, then point to 9 and say, “Do you agree that if we added these numbers up this will be the answer?” Indicate 9. If the student agrees and appears to understand the task, proceed to Card 2.
Place Card 2 in front of the student and ask, “The answer is missing from this card. Can you add up the numbers to find the answer please?… How did you work that out?” Note student’s response then remove the card. If answered relatively easily proceed to Card 3, otherwise stop at this point.
Place Card 3 in front of the student and say: “This time the answer is there (point to 24), but one of the numbers is missing. Can you work out what number is missing please? … How did you work that out?”
If this was done relatively easily, remove Card 3 and place Card 4 in front of the student. Ask, “What do you think needs to be done here? … Can you do that for me please? … Can you tell me how you did that?” If the student hesitates or finds this difficult, stop and try to find out why.
If this was done easily, proceed to Card 5 and repeat the questions.
3.2 Advice rubric
This task assumes some facility with addition and subtraction facts to 20. It examines the extent to which students have access to efficient mental strategies to add and subtract 1 and 2 digit numbers to 30 and beyond, which is an important pre-requisite for developing efficient mental strategies for the multiplication facts to 100.
|Observed response|Interpretation/Suggested teaching response|
Little/no response to Card 1, counts to 9 by ones or uses fingers
May not understand task and/or ‘trust the count’ for single-digit numbers shown, may not be able to keep track of the count
- Use subitisation cards to check part-part-whole understanding for numbers to 5 and the extent to which students can recognise and work with numerals to 5 without having to model or count by ones (trusting the count)
- Build number fact knowledge (and trusting the count) to 10 using subitisation cards and ten-frames (ie, recognise 7 is 5 and 2 more, 3 and 4, 1 more than 6, and so on).
- Use two large dot dice and/or ten-frame cards to model counting on (ie, cover number that is known and count on by ones), extend to counting on by 1, 2 or 3 mentally
- Use 2-row bead frames and ten frames to build doubles knowledge to 20
Experiences difficulty with Card 2 (eg, takes a long time, uses fingers, taps, nods), and/or indicates that they counted by ones
Suggests little or no access to mental strategies beyond count on by 1, 2, or 3 from larger
- If students can subitise numbers up to 5, continue building part-part-whole knowledge of numbers to 10 as above, using subitisation cards and ten-frames (eg, 17 is 1 ten and 7 ones)
- Develop more efficient addition strategies for number facts to 20: count-on, count-on-from-larger, doubles and near doubles (eg, for 8 and 9, double 8 is 16 and one more is 17), make-to-ten (eg, for 6 and 8, simultaneously recognise 8 is 2 less than 10 and 6 is 2 and 4, so 8, 10, 14)
- Use ‘thinking strings’ to model addition of three or more digits (eg, for 8 and 5 and 7, record: “8, 10, 13, 20”)
Able to find the sum (23) for Card 2 and the missing number (12 ) for Card 3 but experiences difficulty with Card 4
Suggests a knowledge of addition facts to 20 and/or access to relatively efficient mental strategies for adding and subtracting single digit numbers.
- Extend doubling/near doubling strategies to 2-digit numbers emphasising the count of tens (eg, for double 24, think: double 2 tens is 4 tens , double 4 ones is 8 ones, so 48)
- Use open number lines and thinking strings to extend the ‘make-to-ten’ strategy (eg, for 26 and 7) and counting in place-value parts (eg, for 36 and 57: start at 57, count on 3 tens, 87, and 6 more, 87, 90, 93)
- Use similar cards to model and develop ‘inspection’ strategies for adding and subtracting 3 two-digit numbers (eg, for Card 4, 3 tens and 7 tens is 10 tens and 5 more tens is 15 tens; 8 and 2 gives 1 more ten, 16 tens, and 4 more, 164)
- Introduce/consolidate column addition for 2 or more addends and trading strategies to support written recording for subtraction
- Consider introducing array-based strategies for multiplication facts, eg, for 2s facts (2 ones, 2 twos, 2 threes, …) think doubles, for 3s facts (3 ones, 3 twos, 3 threes, …) think double the group and one more group etc (see Developing the Big Ideas in Number (PDF, 125 KB))
Completes all cards reasonably efficiently
Indicates sound knowledge of addition and subtraction facts and access to flexible mental strategies for adding and subtracting 2-digit numbers.
- Consolidate written recording for addition and subtraction with regrouping and trading
- Extend multiplication strategies to 100 and beyond (eg, 9 twenty-threes, think: less than 10 twenty-threes (230), one group less, 207)
|
Table of contents:
- What is Pythagorean theorem used for?
- Is c² always the hypotenuse?
- For which triangle does the equation 4² + 4² = c² apply?
- Can Pythagorean theorem have decimals?
- How would you write the Pythagorean theorem equation for this right triangle?
- When can you use a² + b² = c²?
- What does a² + b² = c² mean?
- How do you prove a square b squared?
- What is a² + b²?
- What is (a − b) whole square?
- What is the formula of a square?
- How do I calculate square meters?
- Why is area in square units?
- What is the symbol of square meter?
- How do you explain square units?
- How do we calculate area?
- Is Triangle a formula?
- What is the square area of a rectangle?
- What is the formula for area of the triangle?
- What is S in Triangle?
- What is the area of the right triangle?
- What is area of square?
- What is square the company?
What is Pythagorean theorem used for?
Given two straight lines, the Pythagorean Theorem allows you to calculate the length of the diagonal connecting them. This application is frequently used in architecture, woodworking, or other physical construction projects. For instance, say you are building a sloped roof.
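For instance, a short Python sketch with made-up roof dimensions (a 6 m horizontal run and a 2.5 m rise) shows the calculation:

```python
import math

run, rise = 6.0, 2.5   # hypothetical horizontal run and vertical rise, in metres

# The sloped length is the hypotenuse: sqrt(run**2 + rise**2).
slope = math.hypot(run, rise)
print(slope)   # 6.5
```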
Is c² always the hypotenuse?
The Pythagorean Theorem is a formula that gives a relationship between the sides of a right triangle. The Pythagorean Theorem only applies to RIGHT triangles. ... NOTE: The side “c” is always the side opposite the right angle. Side “c” is called the hypotenuse. The sides adjacent to the right angle are named “a” and “b”.
For which triangle does the equation 4² + 4² = c² apply?
Step-by-step explanation: By the Pythagorean theorem, in a right triangle the square of the hypotenuse is equal to the sum of the squares of the other two sides. So the equation 4² + 4² = c² applies to a right triangle whose two legs each have length 4.
Can Pythagorean theorem have decimals?
Pythagorean triples are the integers that fit the formula for the Pythagorean Theorem; by definition these are whole numbers, not decimals. The theorem itself, however, works for any side lengths, including decimal values.
How would you write the Pythagorean theorem equation for this right triangle?
The Pythagorean theorem consists of a formula a^2+b^2=c^2 which is used to figure out the value of (mostly) the hypotenuse in a right triangle. The a and b are the 2 "non-hypotenuse" sides of the triangle (Opposite and Adjacent).
When can you use a² + b² = c²?
It works the other way around, too: when the three sides of a triangle make a² + b² = c², then the triangle is right angled.
What does a² + b² = c² mean?
Pythagorean triples: A Pythagorean triple has three positive integers a, b, and c, such that a² + b² = c². In other words, a Pythagorean triple represents the lengths of the sides of a right triangle where all three sides have integer lengths.
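A small, illustrative Python helper that tests whether three integers form a Pythagorean triple:

```python
def is_pythagorean_triple(a: int, b: int, c: int) -> bool:
    """Return True if a, b and c are positive integers with a**2 + b**2 == c**2."""
    return a > 0 and b > 0 and c > 0 and a ** 2 + b ** 2 == c ** 2

print(is_pythagorean_triple(3, 4, 5))     # True
print(is_pythagorean_triple(5, 12, 13))   # True
print(is_pythagorean_triple(4, 4, 6))     # False: 4**2 + 4**2 = 32 is not a perfect square
```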
How do you prove a square b squared?
Start with a square of side a, whose area is a². Draw a small square with a side of b units at any corner of that square; the area of the small square is b². Now, subtract the square whose area is b² from the square whose area is a². This forms a new geometric shape (an L-shaped region), and its area is equal to a² − b².
What is a² + b²?
a² + b² = c².
What is (a − b) whole square?
The square of the expression a − b is written as (a − b)² in mathematical form, and it expands to a² − 2ab + b².
What is the formula of a square?
Formulas for geometrical figures:

|Figure|Formula|
|---|---|
|Square|4 × side|
|Rectangle|length × width|
|Parallelogram|base × height|
|Triangle|base × height / 2|
How do I calculate square meters?
Multiply the length and width together. Once both measurements are converted into metres, multiply them together to get the measurement of the area in square metres.
Why is area in square units?
Area is measured in "square" units. ... Since each side of a square is the same, it can simply be the length of one side squared. If a square has one side of 4 inches, the area would be 4 inches times 4 inches, or 16 square inches. (Square inches can also be written in².)
What is the symbol of square meter?
The square metre (international spelling as used by the International Bureau of Weights and Measures) or square meter (American spelling) is the SI derived unit of area with symbol m².
How do you explain square units?
A unit square is a square with sides measuring 1 unit, while square units is a unit of measurement.
How do we calculate area?
The simplest (and most commonly used) area calculations are for squares and rectangles. To find the area of a rectangle, multiply its height by its width. For a square you only need to find the length of one of the sides (as each side is the same length) and then multiply this by itself to find the area.
Is Triangle a formula?
The area of a triangle is equal to: (the length of the altitude) × (the length of the base) / 2. In ∆ABC, BD is the altitude to base AC and AE is the altitude to base BC. Triangle Formula: The area of a triangle ∆ABC is equal to ½ × BD × AC = ½ × 5 × 8 = 20. The triangle area is also equal to (AE × BC) / 2.
What is the square area of a rectangle?
Area of Rectangle Formula
|Quantity|Formula|
|---|---|
|Perimeter of a rectangle|P = 2(l + b)|
|Area of a rectangle|A = l × b|
What is the formula for area of the triangle?
The area A of a triangle is given by the formula A = ½bh, where b is the base and h is the height of the triangle.
What is S in Triangle?
There are several ways to compute the area of a triangle. ... Another is Heron's formula which gives the area in terms of the three sides of the triangle, specifically, as the square root of the product s(s – a)(s – b)(s – c) where s is the semiperimeter of the triangle, that is, s = (a + b + c)/2.
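A short Python sketch of Heron's formula; the 3-4-5 triangle and the 5-by-8 right triangle mentioned above are used as checks:

```python
import math

def heron_area(a: float, b: float, c: float) -> float:
    """Area of a triangle from its three side lengths (Heron's formula)."""
    s = (a + b + c) / 2   # semiperimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron_area(3, 4, 5))                  # 6.0
print(heron_area(5, 8, math.hypot(5, 8)))   # approximately 20, matching 1/2 * 5 * 8
```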
What is the area of the right triangle?
Every right triangle can be visualized as one half of a rectangle. That's why we can calculate their area by multiplying one-half the base times the height.
What is area of square?
In other words, the area of a square is the product of the length of each side with itself. That is, Area A = s x s where s is the length of each side of the square. For example, the area of a square of each side of length 8 feet is 8 times 8 or 64 square feet.
What is square the company?
Square, Inc. is an American financial services, merchant services aggregator, and mobile payment company based in San Francisco, California. The company markets software and hardware payments products and has expanded into small business services.
|
The loss of species during the Holocene was dramatically more important on islands than on continents. Seabirds from islands are very vulnerable to human-induced alterations such as habitat destruction, hunting and exotic predators. For example, in the genus Puffinus (family Procellariidae) the extinction of at least five species has been recorded during the Holocene, two of them coming from the Canary Islands.
We used bones of the two extinct Canary shearwaters (P. olsoni and P. holeae) to obtain genetic data, for use in providing insights into the differentiation process within the genus Puffinus. Although mitochondrial DNA (mtDNA) cytochrome b sequences were successfully retrieved from four Holocene specimens of the extinct Lava shearwater (P. olsoni) from Fuerteventura (Canary Islands), the P. holeae specimens yielded no DNA. Only one haplotype was detected in P. olsoni, suggesting a low genetic diversity within this species.
The phylogenetic analyses based on the DNA data reveal that: (i) the “Puffinus puffinus complex”, an assemblage of species defined using osteological characteristics (P. puffinus, P. olsoni, P. mauretanicus, P. yelkouan and probably P. holeae), shows unresolved phylogenetic relationships; (ii) despite the differences in body size and proportions, P. olsoni and the extant P. puffinus are sister species. Several hypotheses can be considered to explain the incipient differentiation between P. olsoni and P. puffinus.
Citation: Ramirez O, Illera JC, Rando JC, Gonzalez-Solis J, Alcover JA, et al. (2010) Ancient DNA of the Extinct Lava Shearwater (Puffinus olsoni) from the Canary Islands Reveals Incipient Differentiation within the P. puffinus Complex. PLoS ONE 5(12): e16072. doi:10.1371/journal.pone.0016072
Editor: M. Thomas P. Gilbert, Natural History Museum of Denmark, Denmark
Received: October 20, 2010; Accepted: December 5, 2010; Published: December 31, 2010
Copyright: © 2010 Ramirez et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Funding: J.C.I. was funded by a Ramón y Cajal fellowship (Spanish Ministry of Education). C.L-F. and O.R. were supported by a grant from the Ministerio de Ciencia e Innovación from Spain. J.A.A. and J.C.R. were partially supported by Spanish Dirección General de Investigación Científica y Técnica, Research Project CGL2007-62047/BTE. C.L-F. and O.R. were supported by Spanish Dirección General de Investigación Científica y Técnica, Research Project BFU2009-06974. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Competing interests: The authors have declared that no competing interests exist.
In the recent history of the planet, humans have been a major underlying factor in determining extinction rates. In fact, the ongoing annihilation of vast numbers of species is known as the Holocene extinction. In general, the ensuing loss of biodiversity is dramatically more pronounced on islands than on continents, as islands often have a higher number of endemic species per unit area, and specific adaptations of their biota that predispose them to extinction, including tameness, site faithfulness, flightlessness and reduced fecundity. During the Holocene, more than 20 seabird extinctions and a higher number of local extirpations have been documented on islands around the world. Phylogenetic relationships and causes of extinctions are often difficult to unravel, but recent studies using ancient DNA have greatly improved our understanding of the evolutionary history of these extinct species.
In most cases, decreases of distribution ranges or extinctions have been related to human arrival causing habitat destruction, hunting pressure and the introduction of exotic predators. Among seabirds, albatrosses and petrels (procellariiforms) are particularly vulnerable to extinction due to their high breeding site fidelity and lack of effective anti-predator behaviour. These species usually breed on islands free of predators. Thus, when predators are introduced, their limited behavioural plasticity becomes, in essence, an evolutionary trap that can easily lead to extinction. In fact, during the last 10,000 years, 56% of Holocene procellariiform species have lost populations, and five Puffinus shearwater species with unclear evolutionary relationships have been reported to have become extinct. The reason for this is usually claimed to be the human arrival on the islands they inhabited. The genus Puffinus (family Procellariidae) is a diverse group of small and medium-sized birds (wings spanning 0.6–1.5 meters and weights of 170–700 grams) with a worldwide distribution. Although, in general, recent phylogenetic studies of the group, based on mitochondrial gene trees of extant species, support previous morphology-based classifications, the phylogenetic relationships among some of the clades are still not well understood. For example, some of the monophyletic lineages, such as P. lherminieri, P. baroli and P. puffinus, P. yelkouan, P. mauretanicus, form unresolved polytomies within the genus Puffinus. Such results could be explained by recent diversification, lineage extinction and incomplete sampling of extant taxa, favoured by the remarkable philopatry exhibited by shearwater populations.
The Dune shearwater (P. holeae) and the Lava shearwater (P. olsoni) are two of the shearwater species that became extinct during the Holocene. These are known to be former breeders in the Canary Islands, together with two other Puffinus species, the Manx shearwater (P. puffinus) (Figure 1) and the Little shearwater (P. baroli), which currently show a patchy distribution in the Canary Islands. The distributions of P. holeae and P. olsoni were restricted to the Eastern Canary Islands (i.e., Lanzarote, Fuerteventura and the surrounding islets) (Figure 1). According to the areas where bones were collected, it is probable that they displayed different breeding behaviours. Bones of P. olsoni are abundant in caves located in recent lava fields, whereas remains of P. holeae are abundant in aeolianite formations or fossil dunes.
The locations of the sites where the bones used in this work were collected are also shown. (a): archaeological site; (p): paleontological site.
It has been suggested that the extinction of P. holeae was directly linked to the aboriginal colonization of the Canary Islands; its last known record has been dated to 1,159±790 calibrated years (yr) before present (BP). P. holeae bones from sites located in the south of Fuerteventura are much older, dating to the Upper Pleistocene. Direct radiocarbon dating of eggshells from one of these sites yielded an age of 32,100±1,100 14C yr BP.
In contrast, the known assemblage of P. olsoni remains is Holocene in age. According to archaeological evidence, P. olsoni was also exploited as a food resource by the aboriginal Canarian people, but the extinction of this shearwater took place after 1270 AD, that is, more than one millennium after the arrival of the pre-Hispanic settlers. It has been suggested that the introduction of exotic mammals such as rats and cats after the European arrival in the Canary archipelago (14th century) was the most probable cause of its extinction.
The morphological traits of the two extinct shearwaters have been thoroughly examined in relation to extant shearwaters. P. olsoni was intermediate in size (estimated weight range: 175–245 g; J.C.R., unpublished data) between P. baroli (170–225 g) and P. puffinus (375–459 g). P. holeae was larger than P. olsoni, with an intermediate size (estimated weight range: 508–597 g; J.C.R., unpublished data) between P. puffinus and Cory's Shearwater, Calonectris diomedea (800–1,100 g). Irrespective of the differences in body size, some osteological traits (especially skull features) suggest that both extinct species were closely related to either P. mauretanicus or P. puffinus. However, their evolutionary relationships are still unclear and no attempt to reconstruct their phylogeny by means of molecular tools has been undertaken so far.
The aim of this study was to use ancient mtDNA sequences from bones of both extinct Canary species to: (i) investigate their phylogenetic relationships within the shearwater group and estimate their divergence times; and (ii) compare the phylogenetic information with the osteological characters in order to determine whether morphological differentiation is coincident with the genetic affinities obtained.
Materials and Methods
Fourteen bone fragments, from a minimum of five P. olsoni individuals, and 10 forelimb and hindlimb bone fragments (humerus, ulna, tibiotarsus and tarsometatarsus), from a minimum of four P. holeae individuals, were used for DNA extraction. Materials were identified through direct comparison with bones of both species from the collections at the Zoology Department of La Laguna University (DZUL).
In order to increase the likelihood of recovering the maximum genetic variability of P. olsoni, three sites in Fuerteventura were selected for DNA analysis (Figure 1): three humeri plus one ulna, deriving from at least two specimens, from the Cueva de Las Palomas palaeontological site; one femur, one fragment of radius, one vertebra, four fragments of humerus plus one ulna, from at least two specimens, from the Cueva de Las Moscas archaeological site; and two humeri from the Cueva de La Laguna archaeological site (Figure 1). All samples were collected at the surface level. The recent aspect of the remains and the 14C ages of bones of this species from the two mentioned archaeological sites (1,290–1,440 and 750–969 calibrated yr, respectively) indicate a late Holocene age. No chronological information exists on bones from Cueva de Las Palomas, but based on the recent geological age of this volcano the materials are estimated to be <10,000 years old. No material of putatively recent P. holeae is available for DNA analysis. Bones used for the extractions were collected at the site called Huesos del Caballo in the south of Fuerteventura. P. holeae eggshells collected from this paleontological site were previously dated to the Upper Pleistocene.
Puffinus genomic DNA was isolated from bone powder in dedicated ancient DNA laboratories at the Institute of Evolutionary Biology (IBE) and at the University Pompeu Fabra (UPF) in Barcelona, by a proteinase-K extraction followed by a phenol-chloroform extraction protocol and a Centricon-100 concentration column (Amicon), as described elsewhere. No previous work with extant shearwaters had been conducted at these laboratories.
Puffinus-specific primers were designed to amplify a fragment of 484 base pairs (bp) of the mitochondrial DNA (mtDNA) cytochrome b (cyt-b) gene. This was achieved through the amplification of five overlapping fragments of 173, 177, 119, 119 and 119 bp, respectively, using a two-step PCR protocol. Additionally, after unsuccessful amplifications, two shorter fragments of 75 and 102 bp were also tested in order to account for possible DNA degradation. Sequences of the primers used for amplifying each of the targeted fragments are reported in Table 1. Amplified products were purified with a gene-clean silica method using the DNA Extraction Kit (Fermentas, USA) and cloned using the Topo TA cloning kit (Invitrogen, The Netherlands). Insert-containing colonies were subjected to 30 cycles of PCR with M13 universal primers and subsequently sequenced with an Applied BioSystems 3100 DNA sequencer at the Servei de Seqüenciació of the Universitat Pompeu Fabra (Barcelona).
The ancient mtDNA cyt-b sequences obtained were compared to a dataset of 87 mtDNA cyt-b sequences originating from 34 extant species of the genus Puffinus gathered from NCBI GenBank (Table 2). Additionally, cyt-b sequences from Calonectris diomedea, Lugensa brevirostris, Bulweria bulwerii, Diomedea epomophora, D. exulans, Oceanodroma furcata, O. leucorhoa, Pterodroma axillaris and Struthio camelus were also obtained from GenBank to be used as outgroups (Table 2). Recent phylogenetic studies carried out with seabirds in the north Atlantic archipelagos have estimated divergence times between lineages using the Kimura two-parameter correction, and suggest that a mutation rate of 0.9% per million years can be used for Procellariidae. In order to compare our estimate with previous studies, genetic distances (corrected by Kimura's two-parameter evolution model) among taxa were obtained using MEGA 4.0. Divergence times were then estimated using the aforementioned mutation rate of 0.9% per million years. Phylogenetic reconstruction was performed with MrBayes 3.1.2. The tree was rooted at the most phylogenetically distant outgroup species, Struthio camelus. The best model of nucleotide substitution was chosen using the Bayesian Information Criterion model selection implemented in the program jModelTest version 0.1.1. Posterior distributions were obtained from four independent Markov chain Monte Carlo (MCMC) chains (three heated chains and one cold chain) run for 10,000,000 iterations with the temperature parameter set to 0.2, and trees and model parameters were sampled every 1,000 generations. The convergence of the MCMCs was verified visually from the likelihood values, and we also assessed convergence with TRACER v. 1.5. The first quarter of sampled trees was discarded as burn-in, and the inference was drawn from the remaining trees. We repeated all MCMC analyses twice in order to ensure the posterior probabilities were stable.
Results and Discussion
The partial sequence (484 bp) of the cyt-b gene was obtained from four out of five specimens of P. olsoni (Figure S1). However, the P. holeae samples (from the Huesos del Caballo site) yielded no successful amplifications. This failure is not surprising, since these samples are the oldest tested, and the warm climatic conditions of the Canary Islands are highly unfavourable for long-term DNA preservation. Therefore, it can be expected a priori that many Holocene and pre-Holocene remains will have low or null endogenous DNA content. For instance, another Holocene specimen (Myotragus balearicus) from an equally unfavourable Mediterranean environment yielded only 0.27% endogenous DNA, as detected through unspecific shotgun sequencing. Additionally, the remains of P. holeae come from a palaeontological site exposed to climatic factors (rain, wind, sunlight), while those of P. olsoni come from caves, where the effect of these damaging agents is minimized.
The DNA sequences from the four P. olsoni samples represent a unique and exclusive haplotype of the mtDNA cyt-b gene (Figure 2, Figure S1). Three of the four P. olsoni-specific substitutions correspond to either C to T or G to A transitions, which are commonly associated with DNA damage. Nevertheless, we are confident in the authenticity of these substitutions because: 1) they are reproducible among the four samples, 2) DNA from one Puffinus sample (P. olsoni 1) was independently extracted, amplified and sequenced in two dedicated ancient DNA laboratories for authentication, and 3) about 50% of the fragments for each specimen were replicated twice (Figure S1). Additional substitutions in one or a few clones that are present in one particular PCR but not in another PCR from the same sample can reasonably be attributed to DNA damage, and thus were not considered in the phylogenetic analyses.
Numbers above nodes show the Bayesian posterior probability (>0.7). Letters show nodes discussed in the text. Cranium and humerus of P. olsoni (Holotype and Paratype; DZUL 2000 and 1903) and P. puffinus (DZUL 2756) are displayed to highlight the size differences between these sister taxa.
jModelTest selected the Hasegawa Kishino Yano model (HKY+I+G). We confirmed with TRACER the concordance between runs obtained with the Bayesian inference. All parameters had effective sample size values above 240. The general topology obtained by performing Bayesian inference (Figure 2) supports previous phylogenetic assessments of the shearwaters, performed using different optimality criteria. Our results do not seem to support the monophyly of the genus Puffinus, since Calonectris diomedea is grouped together with all Puffinus species with high nodal support (Figure 2, node A). The Bayesian inference supports a monophyletic group of seven species (P. tenuirostris, P. gravis, P. griseus, P. creatopus, P. carneipes, P. bulleri and P. pacificus) that form a distinct and ancient lineage (node B). In contrast, the C. diomedea, P. subalaris and P. nativitatis lineages show unresolved phylogenetic relationships to the rest of the shearwater species analyzed (nodes C, D, E and F). All Puffinus species included in node F are grouped together with high nodal support. The four P. olsoni individuals constitute a monophyletic clade with the five P. puffinus individuals, as supported by high Bayesian posterior probabilities (node G). However, the inclusion of P. olsoni in the phylogenetic analysis was unable to resolve the position of the P. puffinus-P. olsoni clade (node G) with respect to the large and monophyletic clade containing 27 Puffinus species (node H). This lack of resolution could be attributed to the rapid origin and radiation of this clade (node G) from the respective lineages within the monophyletic Puffinus group (node H).
Using the previously estimated mutation rate for Procellariidae of 0.9% per million years, the time of the most recent common ancestor (MRCA) for P. olsoni and P. puffinus was estimated to be 600,000±400,000 years. Interestingly, the time of the split between P. puffinus and P. olsoni might be close to the diversification time estimated for the Cory's shearwater Palearctic clade (900,000–700,000 years ago). Recent phylogeographic studies have shown the influence of past climatic and geologic events on the patterns of genetic structure of many seabird species. Variation in marine productivity related to accessibility, availability and prey size could have produced specialization in foraging strategies and limited gene flow among seabird populations, thus favouring differentiation and speciation processes by allopatry and sympatry.
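As a rough sketch of that conversion (the distance value below is purely illustrative, not a figure reported here), a corrected genetic distance divided by the divergence rate gives the time estimate:

```python
# Illustrative only: k2p_distance is an assumed pairwise distance, not a value from this study.
k2p_distance = 0.0054    # corrected proportion of differing sites between two cyt-b sequences
rate_per_my = 0.009      # 0.9% sequence divergence per million years, the rate cited above

divergence_time_my = k2p_distance / rate_per_my
print(f"~{divergence_time_my:.1f} million years since the most recent common ancestor")  # ~0.6
```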
According to some morphological traits, P. olsoni should be included within the so-called “Puffinus puffinus complex” (i.e. P. puffinus, P. mauretanicus, P. yelkouan and, probably, P. holeae). Puffinus olsoni is characterized by a lower and less bulky skull than its relatives. The premaxillary is very elongated, the upper edges of the orbits are highly parallel, and it displays a wide but flat humerus. The mitochondrial DNA sequences suggest that the osteological affinities within this complex are not congruent with their phylogenetic relationships, due to the fact that these four taxa are not reciprocally monophyletic (Figure 2). Nevertheless, our results do indicate that, despite the conspicuous differences in size and proportions, P. puffinus and P. olsoni are sister species. Because both Puffinus species inhabited the Canary Islands (Figure 1), the recent split between P. puffinus and P. olsoni reveals an incipient differentiation process interrupted by the extinction of P. olsoni. Some authors have suggested that the diversification process within the Madeiran storm-petrel (Oceanodroma castro), and perhaps in other seabird species, could be explained by allochrony (separation of populations by reproduction time). The timing of breeding within these seabirds varies among populations inhabiting the same archipelago. Although at present it is not possible to test this hypothesis with the extinct P. olsoni, a similar process could explain the recent split and genetic differentiation between this species and the sympatric P. puffinus in the Canary Islands. The differentiation process might have been favoured by the fact that both seabirds probably selected different habitats for nesting. P. puffinus selects laurel forest for nesting, but P. olsoni likely selected caves in lava fields in the semi-arid islands of the Canary archipelago. The remarkable nesting philopatric behaviour of the Puffinus shearwaters could have reinforced such differentiation.
The only cyt-b haplotype found in the four sequences obtained from P. olsoni suggests an unexpectedly low genetic diversity within this species, although incomplete sampling cannot be discarded. The fact that the sequences obtained originate from two different locations in the south of Fuerteventura (Figure 1) might provide support for the former hypothesis. However, it is difficult to establish whether this low diversity is the result of incomplete sampling, of a recent bottleneck previous to its extinction, or of an older historical event. Further studies that combine analysis of more individuals from more localities, radiocarbon dating of the bones in order to study possible temporal changes, and the sequencing of nuclear markers will be needed to understand the evolutionary history of P. olsoni.
Alignment of a 484 bp fragment corresponding to the mtDNA cyt‐b gene obtained in the four samples of P. olsoni.
Conceived and designed the experiments: OR CL-F. Performed the experiments: OR CL-F. Analyzed the data: JCI OR. Contributed reagents/materials/analysis tools: CL-F. Wrote the paper: OR JCI JCR JGS JAA CL-F.
- 1. Pimm SL, Russell GJ, Gittleman JL, Brooks TM (1995) The future of biodiversity. Science 269: 347–350.
- 2. Olson SL, James HF (1982) Fossil birds from the Hawaiian Islands: evidence for a wholesale extinction by man before Western contact. Science 217: 633–635.
- 3. Quammen D (1996) The song of the Dodo: Island Biogeography in an Age of Extinctions. London: Pimlico. 704 p.
- 4. Gaskell J (2000) Who killed the great Auk? Oxford: Oxford university Press. 224 p.
- 5. Worthy TH, Holdaway RH (2002) Prehistoric Life of New Zealand. The lost world of the Moa. Indiana: Indiana University Press. 718 p.
- 6. Duncan RP, Blackburn TM, Worthy TH (2002) Prehistoric bird extinctions and human hunting. Proc R Soc Lond B 269: 517–521.
- 7. Steadman D (2006) Extinction & Biogeography of Tropical Pacific Birds. Chicago: University of Chicago Press. 480 p.
- 8. Boyer AG (2008) Extinction patterns in the avifauna of the Hawaiian islands. Divers Distrib 14: 509–517.
- 9. Rando JC, Alcover JA (2008) Evidence for a second western Palaearctic seabird extinction during the last Millennium: the Lava Shearwater Puffinus olsoni. Ibis 150: 188–192.
- 10. Tyrberg T (2009) Holocene avian extinctions. In: Turvey ST, editor. Holocene Extinctions. Oxford: Oxford University Press. pp. 63–106.
- 11. Moum T, Arnason U, Árnason E (2002) Mitochondrial DNA sequence evolution and phylogeny of the Atlantic Alcidae, including the extinct Great Auk (Pinguinus impennis). Mol Biol Evol 19: 1434–1439.
- 12. Scofield RP (2009) Procellariform extinctions in the Holocene: threat processes and wider ecosystem-scale implications. In: Turvey ST, editor. Holocene Extinctions. Oxford: Oxford University Press. pp. 151–166.
- 13. Atkinson IAE (1985) The spread of commensal species of Rattus to oceanic islands and their effects on island avifaunas. In: Moors PJ, editor. Conservation of Island Birds. Cambridge: ICBP Publication 3. pp. 35–81.
- 14. Igual JM, Forerob MG, Gomez T, Oroa D (2007) Can an introduced predator trigger an evolutionary trap in a colonial seabird? Biol Conserv 137: 189–196.
- 15. Rando JC, Alcover JA (2010) On the extinction of dune shearwater (Puffinus holeae) from the Canary Islands. J Ornithol 151: 365–369.
- 16. Warham J (1990) The Petrels. Their ecology and breeding systems. London: Academic press. 440 p.
- 17. Snow DW, Perrins CM (1998) The birds of the Western Palearctic. Vol. 1 Non-Passerines. Oxford: Oxford University Press. 722 p.
- 18. Austin JJ (1996) Molecular Phylogenetics of Puffinus Shearwaters: Preliminary Evidence from Mitochondrial Cytochrome b Gene Sequences. Mol Phylogenet Evol 6(1): 77–88.
- 19. Kennedy M, Page RDM (2002) Seabird supertrees: Combining partial estimates of Procellariiform phylogeny. Auk 119: 88–108.
- 20. Penhallurick J, Wink M (2004) Analysis of the taxonomy and nomenclature of the Procellariiformes based on complete nucleotide sequences of the mitochondrial cytochrome b gene. Emu 104: 125–147.
- 21. Austin JJ, Bretagnolle V, Pasquet E (2004) A global molecular phylogeny of the small Puffinus shearwaters and implications for systematics of the Little-Audubon's shearwater complex. Auk 121: 847–864.
- 22. Emerson BC, Oromí P, Hewitt GM (2000) Colonization and diversification of the species Brachyderes rugatus (Coleoptera) on the Canary Islands: evidence from mitochondrial DNA COII gene sequences. Evolution 54: 911–923.
- 23. Austin JJ, White RWG, Ovenden JR (1994) Population-genetic structure of a philopatric, colonially nesting seabird, the Short-tailed Shearwater (Puffinus tenuirostris). Auk 111: 70–79.
- 24. Louzao M (2006) Conservation biology of the critically endangered Balearic shearwater Puffinus mauretanicus: bridging the gaps between breeding colonies and marine foraging grounds. Tesis Doctoral. Mallorca: Universitat de les Illes Balears.
- 25. Juste J, Genovart M, Oro D, Bertorelle G, Louzao M, et al. (2007) Identidad y estructura genética de la pardela balear (Puffinus mauretanicus). In: Investigación en Parques Nacionales. Proyectos de investigación en Parques Nacionales. 2003-2006. Ministerio de Medio Ambiente 209–222.
- 26. Walker CA, Wragg GM, Harrison CJO (1990) A new shearwater from the Pleistocene of the Canary Islands and its bearing on the evolution of certain Puffinus shearwaters. Historical Biol 3: 203–224.
- 27. McMinn M, Jaume D, Alcover JA (1990) Puffinus olsoni n. sp.: nova espècie de baldritja recentment extinguida provinent de depòsits espeleològics de Fuerteventura i Lanzarote (Illes Canàries, Atlàntic Oriental). Endins 16: 63–71.
- 28. Martín A, Lorenzo JA (2001) Aves del Archipiélago Canario. La Laguna: Lemus Editor. 787 p.
- 29. Michaux J, Hutterer R, López-Martínez N (1991) New fossil faunas from Fuerteventura, Canary Islands: Evidence for a Pleistocene age of endemic rodents and shrews. C R Acad Sci Paris 312: 801–806.
- 30. Rando JC, Perera MA (1994) Primeros datos de ornitofagia entre los aborígenes de Fuerteventura (Islas Canarias). Archaeofauna 3: 13–19.
- 31. Criado C (1991) La evolución del relieve de Fuerteventura. Puerto del Rosario: Cabildo de Fuerteventura. 319 p.
- 32. Lalueza-Fox C, Rompler H, Caramelli D, Staubert C, Catalano G, et al. (2007) A melanocortin 1 receptor allele suggests varying pigmentation among Neanderthals. Science 318: 1453–1455.
- 33. Krause J, Dear PH, Pollack JL, Slatkin M, Spriggs H, et al. (2006) Multiplex amplification of the mammoth mitochondrial genome and the evolution of Elephantidae. Nature 439: 724–727.
- 34. Gómez-Díaz E, González-Solis J, Peinado MA, Page RDM (2006) Phylogeography of the Calonectris shearwaters using molecular and morphometric data. Mol Phylogenet Evol 41: 322–332.
- 35. Heidrich P, Amengual J, Wink M (1998) Phylogenetic relationships in Mediterranean and North Atlantic Puffinus Shearwaters (Aves: Procellariidae) based on nucleotide sequences of mtDNA. Biochem Syst Ecol 26: 145–170.
- 36. Zino F, Brown R, Biscoito M (2008) The separation of Pterodroma madeira (Zino's Petrel) from Pterodroma feae (Fea's Petrel) (Aves: Procellariidae). Ibis 150: 326–334.
- 37. Nunn GB, Stanley SE (1998) Body size effects and rates of cytochrome b evolution in tube-nosed seabirds. Mol Biol Evol 15: 1360–1371.
- 38. Tamura K, Dudley J, Nei M, Kumar S (2007) MEGA4: Molecular Evolutionary Genetics Analysis (MEGA) software version 4.0. Mol Biol Evol 24: 1596–1599.
- 39. Huelsenbeck JP, Ronquist F (2001) MRBAYES: Bayesian inference of phylogeny. Bioinformatics 17: 754–755.
- 40. Ronquist F, Huelsenbeck JP (2003) MRBAYES 3: Bayesian phylogenetic inference under mixed models. Bioinformatics 19: 1572–1574.
- 41. Posada D (2008) jModelTest: Phylogenetic Model Averaging. Mol Biol Evol 25: 1253–1256.
- 42. Rambaut A, Drummond AJ (2007) Tracer v1.4, Available from http://beast.bio.ed.ac.uk/Tracer.
- 43. Ramírez O, Gigli E, Bover P, Alcover JA, Bertranpetit J, et al. (2009) Paleogenomics in a temperate environment: shotgun sequencing from an extinct Mediterranean Caprine. PLoS One 4(5): e5670.
- 44. Hofreiter M, Jaenicke V, Serre D, Haeseler AvA, Pääbo S (2001) DNA sequences from multiple amplifications reveal artifacts induced by cytosine deamination in ancient DNA. Nucleic Acids Res 29(23): 4793–4799.
- 45. Peck DR, Congdon BC (2004) Reconciling historical processes and population structure in the sooty tern Sterna fuscata. J Avian Biol 35: 327–335.
- 46. Smith AL, Monteiro L, Hasegawa O, Friesen VL (2007) Global phylogeography of the band-rumped storm-petrel (Oceanodroma castro; Procellariiformes: Hydrobatidae). Mol Phylogenet Evol 43: 755–773.
- 47. Friesen VL, Smith AL, Gómez-Día E, Bolton M, Furness RW, et al. (2007) Sympatric speciation by allochrony in a seabird. Proc Nat Acad Sci USA 104: 18589–18594.
- 48. Jesús J, Menezes D, Gomes S, Oliveira P, Nogales M, et al. (2009) Phylogenetic relationships of gadfly petrels Pterodroma spp. From the Northeastern Atlantic Ocean: molecular evidence for specific status of Bugio and Cape Verde petrels and implications for conservation. Bird Conserv Int 19: 199–214.
- 49. Nunn GB, Cooper J, Jouventin P, Robertson CJR, Robertson GG (1996) Evolutionary relationships among extant albatrosses(Procellariiformes: Diomedeidae) established from complete cytochrome b gene sequences. Auk 113: 784–801.
- 50. Lee K, Feinstein J, Cracraft J (1997) Phylogenetic relationships of the ratite birds: resolving conflicts between molecular and morphological data sets. In: Mindell DP, editor. New York: Avian molecular evolution and systematics, Academic Press.
|
In geometry, the circumscribed circle or circumcircle of a polygon is a circle which passes through all the vertices of the polygon. The center of this circle is called the circumcenter and its radius is called the circumradius.
A polygon which has a circumscribed circle is called a cyclic polygon (sometimes a concyclic polygon, because the vertices are concyclic). All regular simple polygons, all isosceles trapezoids, all triangles and all rectangles are cyclic.
A related notion is the one of a minimum bounding circle, which is the smallest circle that completely contains the polygon within it. Not every polygon has a circumscribed circle, as the vertices of a polygon do not need to all lie on a circle, but every polygon has a unique minimum bounding circle, which may be constructed by a linear time algorithm. Even if a polygon has a circumscribed circle, it may not coincide with its minimum bounding circle; for example, for an obtuse triangle, the minimum bounding circle has the longest side as diameter and does not pass through the opposite vertex.
Circumscribed circle, C, and circumcenter, O, of a cyclic polygon, P
All triangles are cyclic; i.e., every triangle has a circumscribed circle.
This can be proven on the grounds that the general equation for a circle with center (a, b) and radius r in the Cartesian coordinate system is
Since this equation has three parameters (a, b, r) only three points' coordinate pairs are required to determine the equation of a circle. Since a triangle is defined by its three vertices, and exactly three points are required to determine a circle, every triangle can be circumscribed.
Straightedge and compass construction
Construction of the circumcircle (red) and the circumcenter Q (red dot)
The circumcenter of a triangle can be constructed by drawing any two of the three perpendicular bisectors. The center is the point where the perpendicular bisectors intersect, and the radius is the length to any of the three vertices.
This is because the circumcenter is equidistant from any pair of the triangle's vertices, and all points on the perpendicular bisectors are equidistant from two of the vertices of the triangle.
Alternate construction of the circumcenter (intersection of broken lines)
An alternate method to determine the circumcenter is to draw any two lines each one departing from one of the vertices at an angle with the common side, the common angle of departure being 90° minus the angle of the opposite vertex. (In the case of the opposite angle being obtuse, drawing a line at a negative angle means going outside the triangle.)
In coastal navigation, a triangle's circumcircle is sometimes used as a way of obtaining a position line using a sextant when no compass is available. The horizontal angle between two landmarks defines the circumcircle upon which the observer lies.
In the Euclidean plane, it is possible to give explicitly an equation of the circumcircle in terms of the Cartesian coordinates of the vertices of the inscribed triangle. Suppose that A = (Ax, Ay), B = (Bx, By) and C = (Cx, Cy)
are the coordinates of points A, B, and C. The circumcircle is then the locus of points v = (vx, vy) in the Cartesian plane satisfying the equations
guaranteeing that the points A, B, C, and v are all the same distance r from the common center u of the circle. Using the polarization identity, these equations reduce to the condition that the matrix
has a nonzero kernel. Thus the circumcircle may alternatively be described as the locus of zeros of the determinant of this matrix:
Using cofactor expansion, let
we then have a|v|2 − 2Sv − b = 0 and, assuming the three points were not in a line (otherwise the circumcircle is that line that can also be seen as a generalized circle with S at infinity), |v − S/a|2 = b/a + |S|2/a2, giving the circumcenter S/a and the circumradius √(b/a + |S|2/a2). A similar approach allows one to deduce the equation of the circumsphere of a tetrahedron.
A unit vector perpendicular to the plane containing the circle is given by
Hence, given the radius r, the center Pc, a point on the circle P0 and a unit normal n̂ of the plane containing the circle, one parametric equation of the circle starting from the point P0 and proceeding in a positively oriented (i.e., right-handed) sense about n̂ is the following:
Trilinear and barycentric coordinates
An equation for the circumcircle in trilinear coordinates x : y : z is:p. 199 a/x + b/y + c/z = 0. An equation for the circumcircle in barycentric coordinates x : y : z is a2/x + b2/y + c2/z = 0.
The isogonal conjugate of the circumcircle is the line at infinity, given in trilinear coordinates by ax + by + cz = 0 and in barycentric coordinates by x + y + z = 0.
Additionally, the circumcircle of a triangle embedded in d dimensions can be found using a generalized method. Let A, B, and C be d-dimensional points, which form the vertices of a triangle. We start by transposing the system to place C at the origin:
The circumradius, r, is then
where θ is the interior angle between a and b. The circumcenter, p0, is given by
This formula only works in three dimensions as the cross product is not defined in other dimensions, but it can be generalized to the other dimensions by replacing the cross products with the following identities:
The Cartesian coordinates of the circumcenter are
Without loss of generality this can be expressed in a simplified form after translation of the vertex A to the origin of the Cartesian coordinate systems, i.e., when A′ = A − A = (A′x,A′y) = (0,0). In this case, the coordinates of the vertices B′ = B − A and C′ = C − A represent the vectors from vertex A′ to these vertices. Observe that this trivial translation is possible for all triangles and the circumcenter of the triangle A′B′C′ follow as
Due to the translation of vertex A to the origin, the circumradius r can be computed as
and the actual circumcenter of ABC follows as
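A small numerical sketch of this translated-vertex computation in two dimensions (the helper below is illustrative, not taken from any particular library):

```python
def circumcenter(A, B, C):
    """Circumcenter of triangle ABC in the plane, via the translated-vertex formula."""
    ax, ay = A
    bx, by = B[0] - ax, B[1] - ay            # B' = B - A
    cx, cy = C[0] - ax, C[1] - ay            # C' = C - A
    d = 2.0 * (bx * cy - by * cx)            # zero exactly when A, B, C are collinear
    ux = (cy * (bx ** 2 + by ** 2) - by * (cx ** 2 + cy ** 2)) / d
    uy = (bx * (cx ** 2 + cy ** 2) - cx * (bx ** 2 + by ** 2)) / d
    return (ux + ax, uy + ay)                # translate back to the original coordinates

# Right triangle with legs 6 and 8: the circumcenter is the midpoint of the hypotenuse.
print(circumcenter((0, 0), (6, 0), (0, 8)))  # (3.0, 4.0)
```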
The circumcenter has trilinear coordinates:p.19
- cos α : cos β : cos γ
where α, β, γ are the angles of the triangle.
In terms of the side lengths a, b, c, the trilinears are
The circumcenter has barycentric coordinates
where a, b, c are edge lengths (BC, CA, AB respectively) of the triangle.
In terms of the triangle's angles the barycentric coordinates of the circumcenter are
Since the Cartesian coordinates of any point are a weighted average of those of the vertices, with the weights being the point's barycentric coordinates normalized to sum to unity, the circumcenter vector can be written as
Here U is the vector of the circumcenter and A, B, C are the vertex vectors. The divisor here equals 16S 2 where S is the area of the triangle.
Cartesian coordinates from cross- and dot-products
In Euclidean space, there is a unique circle passing through any given three non-collinear points P1, P2, and P3. Using Cartesian coordinates to represent these points as spatial vectors, it is possible to use the dot product and cross product to calculate the radius and center of the circle. Let
Then the radius of the circle is given by
The center of the circle is given by the linear combination
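A sketch of these cross- and dot-product formulas with NumPy (the function name and test points are illustrative):

```python
import numpy as np

def circumcircle_3d(p1, p2, p3):
    """Radius and center of the circle through three non-collinear points in 3-space."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    a, b, c = p1 - p2, p2 - p3, p3 - p1
    cross = np.cross(a, b)
    denom = 2.0 * np.dot(cross, cross)
    radius = (np.linalg.norm(a) * np.linalg.norm(b) * np.linalg.norm(c)
              / (2.0 * np.linalg.norm(cross)))
    # Barycentric-style weights for the linear combination of the three points.
    alpha = np.dot(b, b) * np.dot(a, -c) / denom
    beta = np.dot(c, c) * np.dot(-a, b) / denom
    gamma = np.dot(a, a) * np.dot(c, -b) / denom
    center = alpha * p1 + beta * p2 + gamma * p3
    return radius, center

r, center = circumcircle_3d((0, 0, 0), (6, 0, 0), (0, 8, 0))
print(r, center)   # 5.0 and (3, 4, 0)
```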
Location relative to the triangle
The circumcenter's position depends on the type of triangle:
- If and only if a triangle is acute (all angles smaller than a right angle), the circumcenter lies inside the triangle.
- If and only if it is obtuse (has one angle bigger than a right angle), the circumcenter lies outside the triangle.
- If and only if it is a right triangle, the circumcenter lies at the center of the hypotenuse. This is one form of Thales' theorem.
The circumcenter of an acute triangle is inside the triangle
The circumcenter of a right triangle is at the center of the hypotenuse
The circumcenter of an obtuse triangle is outside the triangle
These locational features can be seen by considering the trilinear or barycentric coordinates given above for the circumcenter: all three coordinates are positive for any interior point, at least one coordinate is negative for any exterior point, and one coordinate is zero and two are positive for a non-vertex point on a side of the triangle.
The angles which the circumscribed circle forms with the sides of the triangle coincide with angles at which sides meet each other. The side opposite angle α meets the circle twice: once at each end; in each case at angle α (similarly for the other two angles). This is due to the alternate segment theorem, which states that the angle between the tangent and chord equals the angle in the alternate segment.
Triangle centers on the circumcircle of triangle ABC
In this section, the vertex angles are labeled A, B, C and all coordinates are trilinear coordinates:
- Steiner point = bc / (b2 − c2) : ca / (c2 − a2) : ab / (a2 − b2) = the nonvertex point of intersection of the circumcircle with the Steiner ellipse. (The Steiner ellipse, with center = centroid(ABC), is the ellipse of least area that passes through A, B, and C. An equation for this ellipse is 1/(ax) + 1/(by) + 1/(cz) = 0.)
- Tarry point = sec (A + ω) : sec (B + ω) : sec (C + ω) = antipode of the Steiner point
- Focus of the Kiepert parabola = csc (B − C) : csc (C − A) : csc (A − B).
The diameter of the circumcircle, called the circumdiameter and equal to twice the circumradius, can be computed as the length of any side of the triangle divided by the sine of the opposite angle:
As a consequence of the law of sines, it does not matter which side and opposite angle are taken: the result will be the same.
The diameter of the circumcircle can also be expressed as
where a, b, c are the lengths of the sides of the triangle and s = (a + b + c)/2 is the semiperimeter. The expression above is the area of the triangle, by Heron's formula. Trigonometric expressions for the diameter of the circumcircle include:p.379
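As a quick numerical check, the circumradius R = abc / (4 × area), so the diameter is 2R, with the area computed from Heron's formula (the 3-4-5 triangle is just an illustrative input):

```python
import math

def circumradius(a: float, b: float, c: float) -> float:
    """Circumradius from the side lengths: R = a*b*c / (4 * area), area by Heron's formula."""
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    return a * b * c / (4 * area)

R = circumradius(3, 4, 5)
print(R, 2 * R)   # 2.5 and 5.0: the hypotenuse of a 3-4-5 triangle is a diameter
```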
The triangle's nine-point circle has half the diameter of the circumcircle.
In any given triangle, the circumcenter is always collinear with the centroid and orthocenter. The line that passes through all of them is known as the Euler line.
The isogonal conjugate of the circumcenter is the orthocenter.
The useful minimum bounding circle of three points is defined either by the circumcircle (where three points are on the minimum bounding circle) or by the two points of the longest side of the triangle (where the two points define a diameter of the circle). It is common to confuse the minimum bounding circle with the circumcircle.
The circumcircle of three collinear points is the line on which the three points lie, often referred to as a circle of infinite radius. Nearly collinear points often lead to numerical instability in computation of the circumcircle.
Circumcircles of triangles have an intimate relationship with the Delaunay triangulation of a set of points.
By Euler's theorem in geometry, the distance between the circumcenter O and the incenter I is
where r is the incircle radius and R is the circumcircle radius; hence the circumradius is at least twice the inradius (Euler's triangle inequality), with equality only in the equilateral case.:p. 198
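A small sketch of this distance formula, using the 3-4-5 right triangle (R = 2.5, r = 1) as an illustrative input:

```python
import math

def circumcenter_incenter_distance(R: float, r: float) -> float:
    """Euler's theorem: the distance OI equals sqrt(R * (R - 2r))."""
    return math.sqrt(R * (R - 2 * r))

# 3-4-5 right triangle: R = 2.5 and r = 1, so OI = sqrt(2.5 * 0.5), approximately 1.118.
print(circumcenter_incenter_distance(2.5, 1.0))
```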
The distance between O and the orthocenter H is:p. 449
For centroid G and nine-point center N we have
The product of the incircle radius and the circumcircle radius of a triangle with sides a, b, and c is: p. 189, #298(d)
With circumradius R, sides a, b, c, and medians ma, mb, and mc, we have:p.289–290
If median m, altitude h, and internal bisector t all emanate from the same vertex of a triangle with circumradius R, then:p.122,#96
Carnot's theorem states that the sum of the distances from the circumcenter to the three sides equals the sum of the circumradius and the inradius.:p.83 Here a segment's length is considered to be negative if and only if the segment lies entirely outside the triangle.
If a triangle has two particular circles as its circumcircle and incircle, there exist an infinite number of other triangles with the same circumcircle and incircle, with any point on the circumcircle as a vertex. (This is the n=3 case of Poncelet's porism). A necessary and sufficient condition for such triangles to exist is the above equality :p. 188
Quadrilaterals that can be circumscribed have particular properties including the fact that opposite angles are supplementary angles (adding up to 180° or π radians).
For a cyclic polygon with an odd number of sides, all angles are equal if and only if the polygon is regular. A cyclic polygon with an even number of sides has all angles equal if and only if the alternate sides are equal (that is, sides 1, 3, 5, ... are equal, and sides 2, 4, 6, ... are equal).
A cyclic pentagon with rational sides and area is known as a Robbins pentagon; in all known cases, its diagonals also have rational lengths.
In any cyclic n-gon with even n, the sum of one set of alternate angles (the first, third, fifth, etc.) equals the sum of the other set of alternate angles. This can be proven by induction from the n=4 case, in each case replacing a side with three more sides and noting that these three new sides together with the old side form a quadrilateral which itself has this property; the alternate angles of the latter quadrilateral represent the additions to the alternate angle sums of the previous n-gon.
Let one n-gon be inscribed in a circle, and let another n-gon be tangential to that circle at the vertices of the first n-gon. Then from any point P on the circle, the product of the perpendicular distances from P to the sides of the first n-gon equals the product of the perpendicular distances from P to the sides of the second n-gon.:p. 72
Point on the circumcircle
Let a cyclic n-gon have vertices A1 , ..., An on the unit circle. Then for any point M on the minor arc A1An, the distances from M to the vertices satisfy:p.190,#332.10
Polygon circumscribing constant
A sequence of circumscribed polygons and circles.
Any regular polygon is cyclic. Consider a unit circle, then circumscribe a regular triangle such that each side touches the circle. Circumscribe a circle, then circumscribe a square. Again circumscribe a circle, then circumscribe a regular 5-gon, and so on. The radii of the circumscribed circles converge to the so-called polygon circumscribing constant
(sequence A051762 in the OEIS). The reciprocal of this constant is the Kepler–Bouwkamp constant.
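Each circumscription step takes a circle of radius r to a circle of radius r / cos(π/n), since a regular n-gon with inradius (apothem) r has circumradius r / cos(π/n). The constant is therefore the infinite product of sec(π/n) over n ≥ 3. A minimal Python sketch of the partial product (the cutoff is an arbitrary choice, and convergence is slow):

```python
import math

def circumscribing_constant(n_max: int = 100_000) -> float:
    """Partial product of sec(pi/n) for n = 3..n_max. Each circumscription
    step multiplies the radius by 1 / cos(pi/n)."""
    product = 1.0
    for n in range(3, n_max + 1):
        product /= math.cos(math.pi / n)
    return product

# Prints ~8.6996; the limit is ~8.7000, approached slowly from below
# because the tail terms behave like 1 + pi**2 / (2 * n**2).
print(circumscribing_constant())
```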
- ^ a b Whitworth, William Allen. Trilinear Coordinates and Other Methods of Modern Analytical Geometry of Two Dimensions, Forgotten Books, 2012 (orig. Deighton, Bell, and Co., 1866). http://www.forgottenbooks.com/search?q=Trilinear+coordinates&t=books
- ^ a b Clark Kimberling's Encyclopedia of Triangle Centers. Archived from the original on 2012-04-19. Retrieved 2012-06-02.
- ^ Wolfram page on barycentric coordinates
- ^ Dörrie, Heinrich, 100 Great Problems of Elementary Mathematics, Dover, 1965.
- ^ Nelson, Roger, "Euler's triangle inequality via proof without words," Mathematics Magazine 81(1), February 2008, 58-61.
- ^ Dragutin Svrtan and Darko Veljan, "Non-Euclidean versions of some classical triangle inequalities", Forum Geometricorum 12 (2012), 197–209. http://forumgeom.fau.edu/FG2012volume12/FG201217index.html
- ^ Marie-Nicole Gras, "Distances between the circumcenter of the extouch triangle and the classical centers", Forum Geometricorum 14 (2014), 51-61. http://forumgeom.fau.edu/FG2014volume14/FG201405index.html
- ^ Smith, Geoff, and Leversha, Gerry, "Euler and triangle geometry", Mathematical Gazette 91, November 2007, 436–452.
- ^ a b c Johnson, Roger A., Advanced Euclidean Geometry, Dover, 2007 (orig. 1929).
- ^ Posamentier, Alfred S., and Lehmann, Ingmar. The Secrets of Triangles, Prometheus Books, 2012.
- ^ a b Altshiller-Court, Nathan, College Geometry, Dover, 2007.
- ^ De Villiers, Michael. "Equiangular cyclic and equilateral circumscribed polygons," Mathematical Gazette 95, March 2011, 102-107.
- ^ Buchholz, Ralph H.; MacDougall, James A. (2008), "Cyclic polygons with rational sides and area", Journal of Number Theory, 128 (1): 17–48, MR 2382768, doi:10.1016/j.jnt.2007.05.005.
- ^ Inequalities proposed in “Crux Mathematicorum”.
- Coxeter, H.S.M. (1969). "Chapter 1". Introduction to geometry. Wiley. pp. 12–13. ISBN 0-471-50458-0.
- Megiddo, N. (1983). "Linear-time algorithms for linear programming in R3 and related problems". SIAM Journal on Computing. 12 (4): 759–776. doi:10.1137/0212052.
- Kimberling, Clark (1998). "Triangle centers and central triangles". Congressus Numerantium. 129: i–xxv, 1–295.
- Pedoe, Dan (1988). Geometry: A Comprehensive Course. Dover. ISBN 0-486-65812-0.
Content from Wikipedia
|
The foundation of much sociological and social thinking was laid by Karl Marx. He gave us the insight to see patterns of conflict evolving and revolving around systems of inequality. Other theorists, such as Weber, have argued against Marx's ideas, while still others have used his thought in their own ways to understand society and the workings of inequality. Marx's theory falls under the ideas of enlightenment, positivism, and progress. Marx argues that societies change and history pushes on in response to economic forces. He sees society functioning like a machine. He holds that the scientific approach is the path to true knowledge that would liberate the oppressed in society. For Marx, major historical changes happen because of class struggle; he pointed to the transition from the medieval to the capitalist economy as an example. Marx gives us a “spiritual existentialism in secular language” (Fromm, 1961, p. 5). To think like Marx is, therefore, to be critical of the inhumanities that human beings face. “Marx’s philosophy is one of protest; it is a protest imbued with faith in man, in his capacity to liberate himself and to realize his potentialities” (Fromm, 1961, p. vi).
The elements of capitalism which Marx discusses include class and class structure, value and exploitation, industrialization, and markets and commodification. According to Marx, all these elements provide the engine of historical change and eventually lead to the termination of capitalism.
Class and class structure: Marx views the dynamic behind history as a process of production. He is concerned with three issues under production: the actual process of production, the social relationships that form because of production, and the outcomes of production. For Marx, human history is a history of class struggle. Capitalism lifts economic work out of all other institutional forms. Under capitalism, the relationships people have with others are seen as different from familial, religious, or political relations. In agricultural societies, for example, family and work coincided: all family members performed their farm work together. Capitalism lifted this work from the farm to urban society and disembedded it from family and social relations. This movement created dual spheres of home and work, each controlled by a specific gender, and led to men controlling women's entire lives; women, in effect, became the property of men. Marx and Engels (1884/1978a) conclude that, “The modern family contains in embryo not only slavery….It contains within itself in miniature all the antagonisms, which later develop on a wide scale within society and its state” (p. 737).
Marx then examines the class structure under capitalism, which tends toward bipolarization. Between the two poles stand the petty bourgeoisie, the class of small land and business owners, and the lumpenproletariat, the underclass. He argues that because the lumpenproletariat have no relationship to economic production, they will become less important in the dynamics of capitalism. The petty bourgeoisie will also shrink in number, because they are bought out or pushed aside by the powerful capitalists. While most people see the size of their business as the result of hard work and competition, Marx sees it as the result of structural, dialectical processes. Capitalists usually reinvest profit to make more profit. As they reinvest capital, the demand for labor goes up; this causes the number of the unemployed to shrink, leading to an increase in wages. This in turn causes profits to go down, and capitalists respond by cutting their production, creating a crisis in the economy. The crisis causes more workers to be laid off and small businesses to collapse. The small businesses are bought out, and the once small-scale capitalists start working for the larger capitalists. The process repeats over time, leaving capital centralized in fewer and fewer hands. Marx pointed out that it would be easier to take over power from fewer capitalists when the revolution came. Finally, the gap between the workers and the owners widens, intensifying the conflict between the two parties.
Value and exploitation: Marx develops this theory in contrast with the political economists of his day. For the political economists, commodities, value, profit, private property, and the division of labor were natural effects of social evolution. Marx, however, sees all of these as instruments of oppression that affect people's life chances, because people pay more for commodities than they are worth. Adam Smith (1776) proposed ways of measuring commodities. He argues that every commodity has two kinds of value: use-value and exchange-value. Use-value is the actual functional value that a product contains, while exchange-value is the rate of exchange a commodity bears when compared with other commodities. One could exchange a commodity that has a use-value with one that does not, but both have exchange-value. What allows these two commodities to be exchanged? Smith argues, and Marx agrees, that the common denominator is human labor: “Labor, therefore, is the real measure of the exchangeable value of all commodities” (Smith, 1776/1937, p. 30). Marx equates labor with money but maintains their distinction. He argues that the difference between use-value and exchange-value means we pay more for a product than its use-value would indicate. Marx argues that capitalists are driven to increase their profits and the level of surplus labor and, therefore, the rate of exploitation.
Industrialization, markets and commodification: Industrialization is the process through which work moves from being performed by humans to being performed by machines. It increases the level of production and expands markets. The aim of capitalists is to gain more profit, so they expand their markets in every direction; such markets are inherently susceptible to expansion because they are driven by the capitalists' interest in profit. Marx argues that as the market becomes more important in society and the use of money becomes more universal, money becomes the determining factor in all human relations. This expansive nature of the market is known as commodification: it describes how human life can be turned into something that can be sold, and Marx sees it expanding at an ever-increasing rate. The drive for expanding profits and the endless potential for commodification are among the main factors that have contributed to globalization.
In sum, Marx argues that societies change and history pushes on in response to economic forces, forces set in motion by capitalists who seek ever more capital while controlling the less fortunate.
|
We were all taught as children that there are five senses: sight, taste, sound, smell, and touch. The first four senses rely on clear, distinct organs, such as the eyes, taste buds, ears, and nose, but how exactly does the body sense touch? Touch is experienced over the entire body, both inside and outside, and no single organ is responsible for sensing it. Rather, there are tiny receptors, or nerve endings, throughout the body which sense touch where it occurs and send signals to the brain with information about the type of touch that occurred. Just as a taste bud on the tongue detects flavor, mechanoreceptors are receptors within the skin and on other organs that detect sensations of touch. They are known as mechanoreceptors because they are designed to detect mechanical sensations, or differences in pressure.
Role of Mechanoreceptors
A person understands that they have experienced a sensation once the organ responsible for discovering that specific sense sends a message to the brain, which is the primary organ that processes and arranges all of the information. Messages are sent from all areas of the body to the brain through wires referred to as neurons. There are thousands of small neurons that branch out to all areas of the human body, and on the endings of many of these neurons are mechanoreceptors. To demonstrate what happens when you touch an object, we will use an example.
Envision that a mosquito lands on your arm. The pressure of this insect, light as it is, stimulates mechanoreceptors in that particular area of the arm. Those mechanoreceptors send a message along the neuron they are connected to. The neuron connects all the way to the brain, which receives the message that something is touching your body in the exact location of the specific mechanoreceptor that sent the message. The brain will act on this information. Perhaps it will tell the eyes to look at the region of the arm that detected the touch. And when the eyes tell the brain that there is a mosquito on the arm, the brain may tell the hand to quickly flick it away. That is how mechanoreceptors work. The purpose of the article below is to demonstrate and discuss in detail the functional organization and molecular determinants of mechanoreceptors.
Touch Sense: Functional Organization and Molecular Determinants of Mechanosensitive Receptors
Cutaneous mechanoreceptors are localized in the various layers of the skin, where they detect a wide range of mechanical stimuli, including light brush, stretch, vibration and noxious pressure. This variety of stimuli is matched by a diverse array of specialized mechanoreceptors that respond to cutaneous deformation in a specific way and relay these stimuli to higher brain structures. Studies of mechanoreceptors and of genetically tractable sensory nerve endings are beginning to uncover the mechanisms of touch sensation. Work in this field has provided researchers with a more thorough understanding of the circuit organization underlying the perception of touch. Novel ion channels have emerged as candidates for transduction molecules, and the properties of mechanically gated currents have improved our understanding of the mechanisms of adaptation to tactile stimuli. This review highlights the progress made in characterizing the functional properties of mechanoreceptors in hairy and glabrous skin and the ion channels that detect mechanical inputs and shape mechanoreceptor adaptation.
Keywords: mechanoreceptor, mechanosensitive channel, pain, skin, somatosensory system, touch
Touch is the detection of mechanical stimuli impacting the skin, including innocuous and noxious mechanical stimuli. It is an essential sense for the survival and development of mammals and humans. Contact of solid objects and fluids with the skin gives necessary information to the central nervous system that allows exploration and recognition of the environment and initiates locomotion or planned hand movement. Touch is also very important for learning, social contact and sexuality. The sense of touch is the least vulnerable of the senses, although it can be distorted (hyperesthesia, hypoesthesia) in many pathological conditions.1-3
Touch responses involve a very precise coding of mechanical information. Cutaneous mechanoreceptors are localized in the various layers of the skin where they detect a wide range of mechanical stimuli, including light brush, stretch, vibration, deflection of hair and noxious pressure. This variety of stimuli is matched by a diverse array of specialized mechanoreceptors that respond to cutaneous deformation in a specific way and relay these stimuli to higher brain structures. Somatosensory neurones of the skin fall into two groups: low-threshold mechanoreceptors (LTMRs) that react to benign pressure and high-threshold mechanoreceptors (HTMRs) that respond to harmful mechanical stimulation. LTMR and HTMR cell bodies reside within dorsal root ganglia (DRG) and cranial sensory ganglia (trigeminal ganglia). Nerve fibers associated with LTMRs and HTMRs are classified as Aβ-, Aδ- or C-fibers based on their action potential conduction velocities. C fibers are unmyelinated and have the slowest conduction velocities (~2 m/s), whereas Aδ and Aβ fibers are lightly and heavily myelinated, exhibiting intermediate (~12 m/s) and rapid (~20 m/s) conduction velocities, respectively. LTMRs are also classified as slowly, or rapidly adapting responses (SA- and RA-LTMRs) according to their rates of adaptation to sustained mechanical stimulus. They are further distinguished by the cutaneous end organs they innervate and their preferred stimuli.
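These conduction-velocity classes translate into a simple lookup. The Python sketch below is purely illustrative: the cutoff values are hypothetical midpoints between the approximate velocities quoted above, not physiological definitions:

```python
def fiber_class(conduction_velocity_m_per_s: float) -> str:
    """Illustrative classification using the approximate velocities quoted
    above (C ~2 m/s, A-delta ~12 m/s, A-beta ~20 m/s). The cutoffs are
    assumed midpoints chosen for this sketch, not physiological limits."""
    v = conduction_velocity_m_per_s
    if v < 7:      # near the ~2 m/s unmyelinated range
        return "C fiber (unmyelinated, slow)"
    if v < 16:     # near the ~12 m/s lightly myelinated range
        return "A-delta fiber (lightly myelinated, intermediate)"
    return "A-beta fiber (heavily myelinated, rapid)"

print(fiber_class(2))   # C fiber
print(fiber_class(12))  # A-delta fiber
print(fiber_class(20))  # A-beta fiber
```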
Ability of mechanoreceptors to detect mechanical cues relies on the presence of mechanotransducer ion channels that rapidly transform mechanical forces into electrical signals and depolarise the receptive field. This local depolarisation, called receptor potential, can generate action potentials that propagate toward the central nervous system. However, properties of molecules that mediate mechanotransduction and adaptation to mechanical forces remain unclear.
In this review, we provide an overview of mammalian mechanoreceptor properties in innocuous and noxious touch in the hairy and glabrous skin. We also consider the recent knowledge about the properties of mechanically-gated currents in an attempt to explain the mechanism of mechanoreceptor’s adaptation. Finally, we review recent progress made in identifying ion channels and associated proteins responsible for the generation of mechano-gated currents.
Hair Follicle-Associated LTMRs
The hair follicles represent hair shaft-producing mini-organs that detect light touch. Fibers associated with hair follicles respond to hair motion and its direction by firing trains of action potentials at the onset and removal of the stimulus. They are rapidly adapting receptors.
Cat and rabbit. In the cat and rabbit coat, hair follicles can be divided into three types: the Down hair, the Guard hair and the Tylotrich. The Down hairs (underhair, wool, vellus)4 are the most numerous, the shortest and the finest hairs of the coat. They are wavy, colorless and emerge in groups of two to four hairs from a common orifice in the skin. The Guard hairs (monotrichs, overhairs, tophair)4 are slightly curved, either pigmented or unpigmented, and emerge singly from the mouths of their follicles. The Tylotrichs are the least numerous, the longest and the thickest hairs.5,6 They are pigmented or unpigmented, sometimes both, and emerge singly from a follicle surrounded by a loop of capillary blood vessels. The sensory fiber supply to a hair follicle is located below the sebaceous gland and is attributed to Aβ- or Aδ-LTMR fibers.7
In close apposition to the down hair shaft, just below the level of the sebaceous gland is the ring of lanceolate pilo-Ruffini endings. These sensory nerve endings are positioned in a spiral course around the hair shaft within the connective tissue forming the hair follicle. Within the hair follicle, there are also free nerve endings, some of them forming mechanoreceptors. Frequently, touch corpuscles (see glabrous skin) are surrounding the neck region of tylotrich follicle.
Properties of myelinated nerve endings in cat and rabbit hairy skin were explored intensively in the 1930–1970 period (reviewed in Hamann, 1995).8 Remarkably, Brown and Iggo, studying 772 units with myelinated afferent nerve fibers in the saphenous nerves of cat and rabbit, classified the responses into three receptor types corresponding to the movements of Down hairs (type D receptors), Guard hairs (type G receptors) and Tylotrich hairs (type T receptors).9 All of these afferent nerve fiber responses have been grouped together as the Rapidly Adapting receptor of type I (RA I), in opposition to the Pacinian receptor, named RA II. RA I mechanoreceptors detect the velocity of a mechanical stimulus and have sharp receptive-field borders. They do not detect thermal variations. Burgess et al. also described a rapidly adapting field receptor that responds optimally to stroking of the skin or movement of several hairs, which was attributed to stimulation of pilo-Ruffini endings. None of the hair follicle responses was attributed to C fiber activity.10
Mice. In the dorsal hairy skin of mice, three major types of hair follicles have been described: zigzag (around 72%), awl/auchene (around 23%) and guard or tylotrich (around 5%).11-14 Zigzag and awl/auchene hair follicles produce the thinner and shorter hair shafts and are associated with one sebaceous gland. Guard or tylotrich hairs are the longest of the hair follicle types. They are characterized by a large hair bulb associated with two sebaceous glands. Guard and awl/auchene hairs are arranged in an iterative, regularly spaced pattern, whereas zigzag hairs densely populate the skin areas surrounding the two larger hair follicle types [Fig. 1 (A1, A2 and A3)].
Recently, Ginty and collaborators used a combination of molecular-genetic labeling and somatotopic retrograde tracing approaches to visualize the organization of peripheral and central axonal endings of the LTMRs in mice.15 Their findings support a model in which individual features of a complex tactile stimulus are extracted by the three hair follicle types and conveyed via the activities of unique combinations of Aβ-, Aδ- and C- fibers to dorsal horn.
They showed that genetic labeling of tyrosine hydroxylase-positive (TH+) DRG neurones characterizes a population of nonpeptidergic, small-diameter sensory neurones and allows for visualization of C-LTMR peripheral endings in the skin. Surprisingly, the axonal branches of individual C-LTMRs were found to arborise and form longitudinal lanceolate endings that are intimately associated with zigzag (80% of endings) and awl/auchene (20% of endings), but not tylotrich, hair follicles [Fig. 1 (A4)]. Longitudinal lanceolate endings have long been thought to belong exclusively to Aβ-LTMRs, and it was therefore unexpected that the endings of C-LTMRs would form longitudinal lanceolate endings.15 These C-LTMRs have an intermediate adaptation in comparison with the slowly and rapidly adapting myelinated mechanoreceptors [Fig. 2 (C1)].
A second major population concerns the Aδ-LTMR endings in awl/auchene and zigzag follicles, comparable to the Down hair follicle extensively studied in cat and rabbit. Ginty and collaborators showed that TrkB is expressed at high levels in a subset of medium-diametre DRG neurones. Intracellular recordings using the ex vivo skin-nerve preparation of labeled fibers revealed that they exhibit the physiological properties of fibers previously studied in cat and rabbit: exquisite mechanical sensitivity (von Frey threshold < 0.07 mN), rapidly adapting responses to suprathreshold stimuli, intermediate conduction velocities (5.8 ± 0.9 m/s) and narrow uninflected soma spikes.15 These Aδ-LTMRs form longitudinal lanceolate endings associated with virtually every zigzag and awl/auchene hair follicle of the trunk [Fig. 1 (A5)].
Finally, they showed that the peripheral endings of rapidly adapting Aβ LTMRs form longitudinal lanceolate endings associated with guard (or tylotrich) and awl/auchene hair follicles [Fig. 1 (A6)].15 In addition, Guard hairs are also associated with a Merkel cell complex forming a touch dome connected to Aβ slowly adapting LTMR [Fig. 1 (A7)].
In summary, virtually all zigzag hair follicles are innervated by both C-LTMR and Aδ-LTMR lanceolate endings; awl/auchene hairs are triply innervated by Aβ rapidly adapting-LTMR, Aδ-LTMR and C-LTMR lanceolate endings; guard hair follicles are innervated by Aβ rapidly adapting-LTMR longitudinal lanceolate endings and interact with Aβ slowly adapting-LTMRs of touch dome endings. Thus, each mouse hair follicle receives a unique and invariant combination of LTMR endings corresponding to neurophysiologically distinct mechanosensory end organs. Considering the iterative arrangement of these three hair types, Ginty and collaborators propose that hairy skin consists of an iterative repeat of peripheral units containing (1) one or two centrally located guard hairs, (2) ~20 surrounding awl/auchene hairs and (3) ~80 interspersed zigzag hairs [Fig. 2 (C1)].
Spinal cord projection. The central projections of Aβ rapidly adapting-LTMRs, Aδ-LTMRs and C-LTMRs terminate in distinct, but partially overlapping, laminae (II, III, IV) of the spinal cord dorsal horn. In addition, the central terminals of LTMRs that innervate the same or adjacent hair follicles within a peripheral LTMR unit are aligned to form a narrow LTMR column in the spinal cord dorsal horn [Fig. 1 (B1)]. Thus, it appears likely that a wedge, or column, of somatotopically organized primary sensory afferent endings in the dorsal horn represents the alignment of the central projections of Aβ-, Aδ- and C-LTMRs that innervate the same peripheral unit and detect mechanical stimuli acting upon the same small group of hair follicles. Based on the numbers of guard, awl/auchene and zigzag hairs of the trunk and limbs and the numbers of each LTMR subtype, Ginty and collaborators estimate that the mouse dorsal horn contains 2,000–4,000 LTMR columns, which corresponds to the approximate number of peripheral LTMR units.15
Furthermore, axons of LTMR subtypes are closely associated with one another, having entwined projections and interdigitated lanceolate endings that innervate the same hair follicle. In addition, because the three hair follicle types exhibit different shapes, sizes and cellular compositions, they are likely to have distinct deflectional or vibrational tuning properties. These findings are consistent with classic neurophysiological measurements in the cat and rabbit indicating that Aβ RA-LTMRs and Aδ-LTMRs can be differentially activated by deflection of distinct hair follicle types.16,17
In conclusion, touch in hairy skin is the combination of: (1) the relative numbers, unique spatial distributions and distinct morphological and deflectional properties of the three types of hair follicles; (2) the unique combinations of LTMR subtype endings associated with each of the three hair follicle types; and (3) distinct sensitivities, conduction velocities, spike train patterns and adaptation properties of the four main classes of hair-follicle-associated LTMRs that enable the hairy skin mechanosensory system to extract and convey to the CNS the complex combinations of qualities that define a touch.
Free-Nerve Endings LTMRs
Generally, C-fiber free endings in the skin are HTMRs, but a subpopulation of C-fibers does not respond to noxious touch. This subset of tactile C-fiber (CT) afferents represents a distinct type of unmyelinated, low-threshold mechanoreceptive unit existing in the hairy but not the glabrous skin of humans and other mammals [Fig. 1 (A8)].18,19 CTs are generally associated with the perception of pleasant tactile stimulation in body contact.20,21
CT afferents respond to indentation forces in the range 0.3–2.5 mN and are thus as sensitive to skin deformation as many of the Aβ afferents.19 The adaptation characteristics of CT afferents are thus intermediate in comparison with the slowly and rapidly adapting myelinated mechanoreceptors. The receptive fields of human CT afferents are roughly round or oval in shape. The field consists of one to nine small responsive spots distributed over an area up to 35 mm2.22 The mouse homolog receptors are organized in a pattern of discontinuous patches covering about 50–60% of the area in the hairy skin [Fig. 2 (C2)].23
Evidence from patients lacking myelinated tactile afferents indicates that signaling in CT fibers activate the insular cortex. Since this system is poor in encoding discriminative aspects of touch, but well-suited to encoding slow, gentle touch, CT fibers in hairy skin may be part of a system for processing pleasant and socially relevant aspects of touch.24 CT fiber activation may also have a role in pain inhibition and it has recently been proposed that inflammation or trauma may change the sensation conveyed by C-fiber LTMRs from pleasant touch to pain.25,26
The pathway along which CT afferents travel is not yet known [Fig. 1 (B2)], but low-threshold tactile inputs to spinothalamic projection cells have been documented,27 lending credence to reports of subtle, contralateral deficits of touch detection in human patients following destruction of these pathways by chordotomy procedures.28
LTMRs in Glabrous Skin
Merkel cell-neurite complexes and touch dome. Merkel (1875) was the first to give a histological description of clusters of epidermal cells with large lobulated nuclei making contact with presumed afferent nerve fibers. He assumed that they subserved the sense of touch, calling them Tastzellen (tactile cells). In humans, Merkel cell–neurite complexes are enriched in touch-sensitive areas of the skin; they are found in the basal layer of the epidermis in the fingers, lips and genitals. They also exist in hairy skin at lower density. The Merkel cell–neurite complex consists of a Merkel cell in close apposition to an enlarged nerve terminal from a single myelinated Aβ fiber [Fig. 1 (C1)] (reviewed in Halata and collaborators).29 At the epidermal side the Merkel cell exhibits finger-like processes extending between neighboring keratinocytes [Fig. 1 (C2)]. Merkel cells are keratinocyte-derived epidermal cells.30,31 The term touch dome was introduced to name the large concentration of Merkel cell complexes in the hairy skin of the cat forepaw. A touch dome can have up to 150 Merkel cells innervated by a single Aβ-fiber; in humans, besides Aβ-fibers, Aδ- and C-fibers are also regularly present.32-34
Stimulation of Merkel cell–neurite complexes results in slowly-adapting type I (SA I) responses, which originate from punctate receptive fields with sharp borders. There is no spontaneous discharge. These complexes respond to the indentation depth of the skin and have the highest spatial resolution (0.5 mm) of the cutaneous mechanoreceptors. They transmit a precise spatial image of tactile stimuli and are proposed to be responsible for shape and texture discrimination [Fig. 2 (B1)]. Mice devoid of Merkel cells cannot detect textured surfaces with their feet, although they can do so using their whiskers.35
Whether the Merkel cell, the sensory neuron or both are sites of mechanotransduction is still a matter of debate. In rats, phototoxic destruction of Merkel cells abolishes the SA I response.36 In mice with genetically suppressed Merkel cells, the SA I response recorded in ex vivo skin/nerve preparations completely disappeared, demonstrating that Merkel cells are required for the proper encoding of Merkel receptor responses.37 However, mechanical stimulation of isolated Merkel cells in culture by motor-driven pressure does not generate mechanically-gated currents.38,39 Keratinocytes may play an important role in the normal functioning of the Merkel cell–neurite complex. The Merkel cell finger-like processes can move with skin deformation and epidermal cell movement, and this may be the first step of mechanical transduction. Clearly, the conditions required to study the mechanosensitivity of Merkel cells have yet to be established.
Ruffini endings. Ruffini endings are thin, cigar-shaped, encapsulated sensory endings connected to Aβ nerve fibers. They are small connective tissue cylinders arranged along dermal collagen strands and are supplied by one to three myelinated nerve fibers of 4–6 µm diametre. Up to three cylinders of different orientation in the dermis may merge to form one receptor [Fig. 1 (C3)]. Structurally, Ruffini endings are similar to Golgi tendon organs. They are broadly expressed in the dermis and have been identified as the slowly adapting type II (SA II) cutaneous mechanoreceptors. Against a background of spontaneous nervous activity, a slowly-adapting regular discharge is elicited by maintained, low-force perpendicular mechanical stimulation or, more effectively, by dermal stretch. The SA II response originates from large receptive fields with obscure borders. Ruffini receptors contribute to the perception of the direction of object motion through the pattern of skin stretch [Fig. 2 (A2)].
In mice, SA I and SA II responses can be separated electrophysiologically in ex-vivo nerve-skin preparation.40 Nandasena and collaborators reported the immunolocalization of aquaporin 1 (AQP1) in the periodontal Ruffini endings of the rat incisors suggesting that AQP1 is involved in the maintenance of the dental osmotic balance necessary for the mechanotransduction.41 The periodontal Ruffini endings also expressed the putative mechanosensitive ion channel ASIC3.42
Meissner corpuscles. Meissner corpuscles are localized in the dermal papillae of the glabrous skin, mainly in the palms of the hands and the soles of the feet but also in the lips, tongue, face, nipples and genitals. Anatomically, they consist of an encapsulated nerve ending, the capsule being made of flattened supportive cells arranged as horizontal lamellae embedded in connective tissue. A single Aβ afferent nerve fiber is connected to each corpuscle [Fig. 1 (C4)]. Any physical deformation of the corpuscle triggers a volley of action potentials that quickly ceases, i.e., they are rapidly adapting receptors. When the stimulus is removed, the corpuscle regains its shape and, while doing so, produces another volley of action potentials. Due to their superficial location in the dermis, these corpuscles selectively respond to skin motion, tactile detection of slip and vibrations (20–40 Hz). They are sensitive to dynamic skin deformation – for example, slip between the skin and an object that is being handled [Fig. 2 (A1)].
Pacinian corpuscles. Pacinian corpuscles are the deepest mechanoreceptors of the skin and the most sensitive encapsulated cutaneous mechanoreceptors of skin motion. These large ovoid corpuscles (1 mm in length), made of concentric lamellae of fibrous connective tissue and fibroblasts lined by flat modified Schwann cells, are found in the deep dermis.43 In the center of the corpuscle, in a fluid-filled cavity called the inner bulb, a single unmyelinated ending of an Aβ afferent terminates [Fig. 1 (C5)]. They have a large receptive field on the skin's surface with a particularly sensitive center. The development and function of several rapidly adapting mechanoreceptor types are disrupted in c-Maf mutant mice; in particular, Pacinian corpuscles are severely atrophied.44
Pacinian corpuscles display very rapid adaptation in response to indentation of the skin, the rapidly-adapting type II (RA II) discharge, which is capable of following high-frequency vibratory stimuli and allows the perception of distant events through transmitted vibrations.45 Pacinian corpuscle afferents respond to sustained indentation with transient activity at the onset and offset of the stimulus. They are also called acceleration detectors because they can detect changes in the strength of the stimulus and, if the rate of change in the stimulus is altered (as happens in vibrations), their response becomes proportional to this change. Pacinian corpuscles sense gross pressure changes and, above all, vibrations (150–300 Hz), which they can detect even centimeters away [Fig. 2 (A3)].
A tonic response has been observed in decapsulated Pacinian corpuscles.46 In addition, intact Pacinian corpuscles respond with sustained activity during constant indentation stimuli, without altered mechanical thresholds or response frequency, when GABA-mediated signaling between the lamellate glia and the nerve ending is blocked.47 Thus, the non-neuronal components of the Pacinian corpuscle may have dual roles in filtering the mechanical stimulus and in modulating the response properties of the sensory neurone.
Spinal cord projections. Projections of the Aβ-LTMRs in the spinal cord divide into two branches. The principal central branch ascends in the ipsilateral dorsal columns of the spinal cord to the cervical level [Fig. 1 (B3)]. Secondary branches terminate in lamina IV of the dorsal horn, where they interfere with pain transmission; this may attenuate pain as part of the gate control [Fig. 1 (B4)].48
At cervical levels, axons of the principal branch separate into two tracts: the midline tract comprises the gracile fascicle, conveying information from the lower half of the body (legs and trunk), and the outer tract comprises the cuneate fascicle, conveying information from the upper half of the body (arms and trunk) [Fig. 1 (B5)].
Primary tactile afferents make their first synapse with second-order neurones at the medulla, where fibers from each tract synapse in a nucleus of the same name: gracile fasciculus axons synapse in the gracile nucleus and cuneate axons synapse in the cuneate nucleus [Fig. 1 (B6)]. The neurones receiving these synapses provide the secondary afferents and cross the midline immediately to form a tract on the contralateral side of the brainstem, the medial lemniscus, which ascends through the brainstem to the next relay station, the thalamus [Fig. 1 (B7)].
Molecular specification of LTMRs. Molecular mechanisms controlling the early diversification of LTMRs have recently been partly elucidated. Bourane and collaborators have shown that the neuronal populations expressing the Ret tyrosine kinase receptor (Ret) and its co-receptor GFRα2 in E11–13 embryonic mouse DRG selectively coexpress the transcription factor Mafa.49,50 These authors demonstrate that the Mafa/Ret/GFRα2 neurones are destined to become three specific types of LTMRs at birth: the SA I neurones innervating Merkel-cell complexes, the rapidly adapting neurones innervating Meissner corpuscles and the rapidly adapting afferents (RA I) forming lanceolate endings around hair follicles. Ginty and collaborators also report that DRG neurones expressing early-Ret are rapidly adapting mechanoreceptors from Meissner corpuscles, Pacinian corpuscles and lanceolate endings around hair follicles.51 They innervate discrete target zones within the gracile and cuneate nuclei, revealing a modality-specific pattern of mechanosensory neurone axonal projections within the brainstem.
Exploration of human skin mechanoreceptors. The technique of “microneurography”, described by Hagbarth and Vallbo in 1968, has been applied to study the discharge behavior of single human mechanosensitive endings supplying muscle, joint and skin (see Macefield, 2005, for review).52,53 The majority of human skin microneurography studies have characterized the physiology of tactile afferents in the glabrous skin of the hand. Microelectrode recordings from the median and ulnar nerves in human subjects have revealed the touch sensations generated by the four classes of LTMRs. Meissner afferents are particularly sensitive to light stroking across the skin, responding to local shear forces and incipient or overt slips within the receptive field. Pacinian afferents are exquisitely sensitive to brisk mechanical transients and respond vigorously to blowing over the receptive field; a Pacinian corpuscle located in a digit will usually respond to tapping on the table supporting the arm. Merkel afferents characteristically have a high dynamic sensitivity to indentation stimuli applied to a discrete area and often respond with an off-discharge during release. Although Ruffini afferents do respond to forces applied normally to the skin, a unique feature of SA II afferents is their capacity to respond also to lateral skin stretch. Finally, hair units in the forearm have large ovoid or irregular receptive fields composed of multiple sensitive spots that correspond to individual hairs (each afferent supplies ~20 hairs).
Mechanical Sensitivity of Keratinocytes
Any mechanical stimulus on the skin must be transmitted through the keratinocytes that form the epidermis. These ubiquitous cells may perform signaling functions in addition to their supportive or protective roles. For example, keratinocytes secrete ATP, an important sensory signaling molecule, in response to mechanical and osmotic stimuli.54,55 The release of ATP induces an intracellular calcium increase by autocrine stimulation of purinergic receptors.55 Furthermore, there is evidence that hypotonicity activates the Rho-kinase signaling pathway and subsequent F-actin stress fiber formation, suggesting that mechanical deformation of keratinocytes may mechanically interact with neighboring cells, such as Merkel cells for innocuous touch and C-fiber free endings for noxious touch [Fig. 1 (C6)].56,57
High-threshold mechanoreceptors (HTMRs) are epidermal C- and Aδ-fiber free nerve endings. They are not associated with specialized structures and are observed in both hairy skin [Fig. 1 (A9)] and glabrous skin [Fig. 1 (C7)]. However, the term free nerve ending has to be used prudently, since nerve endings are always in close apposition to keratinocytes, Langerhans cells or melanocytes. Ultrastructural analysis of nerve endings reveals the presence of rough endoplasmic reticulum, abundant mitochondria and dense-core vesicles. The adjacent membranes of epidermal cells are thickened, resembling the post-synaptic membranes of nervous tissue. Note that the interactions between nerve endings and epidermal cells may be bidirectional, since epidermal cells may release mediators such as ATP, interleukins (IL-6, IL-10) and bradykinin, and conversely peptidergic nerve endings may release peptides such as CGRP or substance P acting on epidermal cells. HTMRs comprise mechano-nociceptors excited only by noxious mechanical stimuli and polymodal nociceptors that also respond to noxious heat and exogenous chemicals [Fig. 2 (B2)].58
HTMR afferent fibers terminate on projection neurones in the dorsal horn of the spinal cord. Aδ-HTMRs contact second-order neurones predominantly in laminae I and V, whereas C-HTMRs terminate in lamina II [Fig. 1 (B8)]. Second-order nociceptive neurones project to the contralateral side of the spinal cord and ascend in the white matter, forming the anterolateral system. These neurones terminate mainly in the thalamus [Fig. 1 (B9 and B10)].
Mechano-Currents in Somatosensory Neurones
The mechanisms of slow or rapid adaptation of mechanoreceptors are not yet elucidated. It is not clear to what extent mechanoreceptor adaptation is provided by the cellular environment of the sensory nerve ending, the intrinsic properties of the mechanically-gated channels and the properties of the axonal voltage-gated ion channels in sensory neurones (Fig. 2). However, recent progress in the characterization of mechanically-gated currents has demonstrated that different classes of mechanosensitive channels exist in DRG neurones and may explain some aspects of the adaptation of mechanoreceptors.
In vitro recordings in rodents have shown that the soma of DRG neurones is intrinsically mechanosensitive and expresses cationic mechano-gated currents.59-64 Gadolinium and ruthenium red fully block mechanosensitive currents, whereas external calcium and magnesium at physiological concentrations, as well as amiloride and benzamil, cause partial block.60,62,63 FM1-43 acts as a lasting blocker, and the injection of FM1-43 into the hind paw of mice decreases pain sensitivity in the Randall–Selitto test and increases the paw withdrawal threshold assessed with von Frey hairs.65
In response to sustained mechanical stimulation, mechanosensitive currents decline through channel closure. Based on the time constants of current decay, four distinct types of mechanosensitive currents have been distinguished: rapidly adapting currents (~3–6 ms), intermediately adapting currents (~15–30 ms), slowly adapting currents (~200–300 ms) and ultra-slowly adapting currents (~1000 ms).64 All these currents are present with variable incidence in rat DRG neurones innervating the glabrous skin of the hindpaw.64
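As a rough illustration, the four adaptation classes can be summarized as mono-exponential current decay with representative time constants. In the Python sketch below, the specific values are midpoints of the ranges quoted above, assumed purely for illustration:

```python
import math

# Representative decay time constants (ms) for the four classes above;
# the specific values are illustrative midpoints, not fitted parameters.
TAU_MS = {"rapid": 4.5, "intermediate": 22.0, "slow": 250.0, "ultra-slow": 1000.0}

def current_decay(i_peak_pa: float, t_ms: float, kind: str) -> float:
    """Mono-exponential decline of a mechanosensitive current under a
    sustained stimulus: I(t) = I_peak * exp(-t / tau)."""
    return i_peak_pa * math.exp(-t_ms / TAU_MS[kind])

# After 10 ms a rapidly adapting current has lost ~90% of its peak,
# while an ultra-slowly adapting current has barely declined.
print(current_decay(100.0, 10.0, "rapid"))       # ~10.8 pA
print(current_decay(100.0, 10.0, "ultra-slow"))  # ~99.0 pA
```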
The mechanical sensitivity of mechanosensitive currents can be determined by applying a series of incremental mechanical stimuli, allowing for relatively detailed stimulus-current analysis.66 The stimulus–current relationship is typically sigmoidal, and the maximum amplitude of the current is determined by the number of channels that are simultaneously open.64,67 Interestingly, the rapidly adapting mechanosensitive current has been reported to display low mechanical threshold and half-activation midpoint compared with the ultra-slowly adapting mechanosensitive current.63,65
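The sigmoidal stimulus–current relationship can likewise be sketched as a Boltzmann function; the midpoint and slope below are hypothetical illustration values, not measured ones:

```python
import math

def boltzmann_current(stimulus_um: float, i_max_pa: float,
                      x_half_um: float = 5.0, slope_um: float = 1.0) -> float:
    """Sigmoidal stimulus-current curve: I(x) = I_max / (1 + exp(-(x - x_half)/k)).
    x_half is the half-activation midpoint and k sets the steepness; both
    are assumed values chosen for this sketch."""
    return i_max_pa / (1.0 + math.exp(-(stimulus_um - x_half_um) / slope_um))

print(boltzmann_current(5.0, 100.0))  # 50.0: half-maximal at the midpoint
print(boltzmann_current(9.0, 100.0))  # ~98.2: saturating toward I_max
```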
Sensory neurones with non-nociceptive phenotypes preferentially express rapidly adapting mechanosensitive currents with lower mechanical thresholds.60,61,63,64,68 Slowly and ultra-slowly adapting mechanosensitive currents, by contrast, are only occasionally reported in putative non-nociceptive cells.64,68 This prompted the suggestion that these currents might contribute to the different mechanical thresholds seen in LTMRs and HTMRs in vivo. Although these in vitro experiments should be interpreted with caution, support for the presence of low- and high-threshold mechanotransducers in the soma of DRG neurones was also provided by radial stretch-based stimulation of cultured mouse sensory neurones.69 This paradigm revealed two main populations of stretch-sensitive neurones, one that responds to low stimulus amplitudes and another that selectively responds to high stimulus amplitudes.
These results have important, yet speculative, mechanistic implications: the mechanical threshold of sensory neurones might have little to do with the cellular organization of the mechanoreceptor but may lie in the properties of the mechanically-gated ion channels.
The mechanisms that underlie desensitization of mechanosensitive cation currents in rat DRG neurones have been recently unraveled.64,67 It results from two concurrent mechanisms that affect channel properties: adaptation and inactivation. Adaptation was first reported in auditory hair cell studies. It can be described operationally as a simple translation of the transducer channel’s activation curve along the mechanical stimulus axis.70-72 Adaptation allows sensory receptors to maintain their sensitivity to new stimuli in the presence of an existing stimulus. However, a substantial fraction of mechanosensitive currents in DRG neurones cannot be reactivated following conditioning mechanical stimulation, indicating inactivation of some transducer channels.64,67 Therefore, both inactivation and adaptation act in tandem to regulate mechanosensitive currents. These two mechanisms are common to all mechanosensitive currents identified in rat DRG neurones, suggesting that related physicochemical elements determine the kinetics of these channels.64
In conclusion, determining the properties of endogenous mechanosensitive currents in vitro is crucial in the quest to identify transduction mechanisms at the molecular level. The variability observed in the mechanical threshold and the adapting kinetics of the different mechanically-gated currents in DRG neurones suggest that intrinsic properties of ion channels may explain, at least in part, mechanical threshold and adaptation kinetics of the mechanoreceptors described in the decades 1960–80 using ex vivo preparations.
Putative Mechanosensitive Proteins
Mechanosensitive ion currents in somatosensory neurones are well characterized; by contrast, little is known about the identity of the molecules that mediate mechanotransduction in mammals. Genetic screens in Drosophila and C. elegans have identified candidate mechanotransduction molecules, including the TRP and degenerin/epithelial Na+ channel (Deg/ENaC) families.73 Recent attempts to elucidate the molecular basis of mechanotransduction in mammals have largely focused on homologs of these candidates. Additionally, many of these candidates are present in cutaneous mechanoreceptors and somatosensory neurones (Fig. 2).
Acid-Sensing Ion Channels
ASICs belong to a proton-gated subgroup of the degenerin–epithelial Na+ channel family.74 Three members of the ASIC family (ASIC1, ASIC2 and ASIC3) are expressed in mechanoreceptors and nociceptors. The role of ASIC channels has been investigated in behavioral studies using mice with targeted deletions of ASIC channel genes. Deletion of ASIC1 does not alter the function of cutaneous mechanoreceptors but increases the mechanical sensitivity of afferents innervating the gut.75 ASIC2 knockout mice exhibit decreased sensitivity of rapidly adapting cutaneous LTMRs.76 However, subsequent studies reported a lack of effect of knocking out ASIC2 on both visceral mechano-nociception and cutaneous mechanosensation.77 ASIC3 disruption decreases the mechanosensitivity of visceral afferents and reduces the responses of cutaneous HTMRs to noxious stimuli.76
The Transient Receptor Potential (TRP) Channels
The TRP superfamily is subdivided into six subfamilies in mammals.78 Nearly all TRP subfamilies have members linked to mechanosensation in a variety of cell systems.79 In mammalian sensory neurones, however, TRP channels are best known for sensing thermal information and mediating neurogenic inflammation, and only two TRP channels, TRPV4 and TRPA1, have been implicated in touch responsiveness. Disrupting TRPV4 expression in mice has only modest effects on acute mechanosensory thresholds but strongly reduces sensitivity to noxious mechanical stimuli.80,81 TRPV4 is a crucial determinant in shaping the response of nociceptive neurones to osmotic stress and to mechanical hyperalgesia during inflammation.82,83 TRPA1 also seems to have a role in mechanical hyperalgesia, and TRPA1-deficient mice exhibit deficits in mechanical pain sensitivity. TRPA1 contributes to the transduction of mechanical, cold and chemical stimuli in nociceptor sensory neurones, but it appears not to be essential for hair-cell transduction.84,85
There is no clear evidence indicating that the TRP and ASIC channels expressed in mammals are mechanically gated. None of these channels, when expressed heterologously, recapitulates the electrical signature of the mechanosensitive currents observed in their native environment. This does not rule out the possibility that ASIC and TRP channels are mechanotransducers, given the uncertainty over whether a mechanotransduction channel can function outside of its cellular context (see the section on SLP3).
Piezo proteins have recently been identified as promising candidates for mechanosensing proteins by Coste and collaborators.86,87 Vertebrates have two Piezo members, Piezo 1 and Piezo 2, previously known as FAM38A and FAM38B, respectively, which are well conserved throughout multicellular eukaryotes. Piezo 2 is abundant in DRGs, whereas Piezo 1 is barely detectable. Piezo-induced mechanosensitive currents are inhibited by gadolinium, ruthenium red and GsMTx4 (a toxin from the tarantula Grammostola spatulata).88 Expression of Piezo 1 or Piezo 2 in heterologous systems produces mechanosensitive currents, the kinetics of inactivation of the Piezo 2 current being faster than those of Piezo 1. Similar to endogenous mechanosensitive currents, Piezo-dependent currents have reversal potentials around 0 mV and are cation non-selective, with Na+, K+, Ca2+ and Mg2+ all permeating the underlying channel. Likewise, Piezo-dependent currents are regulated by membrane potential, with a marked slowing of current kinetics at depolarized potentials.86
Piezo proteins are undoubtedly mechanosensing proteins and share many properties with the rapidly adapting mechanosensitive currents in sensory neurones. Treatment of cultured DRG neurones with Piezo 2 short interfering RNA decreased the proportion of neurones with rapidly adapting currents and decreased the percentage of mechanosensitive neurones.86 Transmembrane domains are located throughout the Piezo proteins, but no obvious pore-containing motifs or ion channel signatures have been identified. However, mouse Piezo 1 protein purified and reconstituted into asymmetric lipid bilayers and liposomes forms ion channels sensitive to ruthenium red.87 An essential step in validating mechanotransduction through Piezo channels is to use in vivo approaches to determine their functional importance in touch signaling. Initial information comes from Drosophila, where deletion of the single Piezo member reduced the mechanical response to noxious stimuli without affecting normal touch.89 Although their structure remains to be determined, this novel family of mechanosensitive proteins is a promising subject for future research, beyond the borders of touch sensation. For example, a recent study of patients with anemia (hereditary xerocytosis) shows the role of Piezo 1 in maintaining erythrocyte volume homeostasis.90
Transmembrane Channel-Like (TMC)
A recent study indicates that two proteins, TMC1 and TMC2, are necessary for hair cell mechanotransduction.91 Hereditary deafness due to TMC1 gene mutations has been reported in humans and mice.92,93 The presence of these channels has not yet been shown in the somatosensory system, but they are a promising lead to investigate.
Stomatin-Like Protein 3 (SLP3)
In addition to the transduction channels, some accessory proteins linked to the channel have been shown to play a role in touch sensitivity. SLP3 is expressed in mammalian DRG neurones. Studies using mutant mice lacking SLP3 have shown changes in mechanosensation and in mechanosensitive currents.94,95 SLP3's precise function remains unknown. It may be a linker between the mechanosensitive channel and the underlying microtubules, as proposed for its C. elegans homolog MEC-2.96 Recently, G. R. Lewin's lab suggested that a tether is synthesized by DRG sensory neurones and links mechanosensitive ion channels to the extracellular matrix.97 Disrupting the link abolishes the RA-mechanosensitive current, suggesting that some ion channels are mechanosensitive only when tethered. RA-mechanosensitive currents are also inhibited by laminin-332, a matrix protein produced by keratinocytes, reinforcing the hypothesis of a modulation of the mechanosensitive current by extracellular proteins.98
K+ Channel Subfamily
In parallel to the cationic depolarizing mechanosensitive currents, the presence of repolarizing mechanosensitive K+ currents is under investigation. K+ channels in mechanosensitive cells can weigh in on the current balance and contribute to defining the mechanical threshold and the time course of adaptation of mechanoreceptors.
KCNK members belong to the two-pore domain K+ channel (K2P) family.99,100 The K2P channels display a remarkable range of regulation by cellular, physical and pharmacological agents, including pH changes, heat, stretch and membrane deformation. These channels are active at the resting membrane potential. Several KCNK subunits are expressed in somatosensory neurones.101 KCNK2 (TREK-1), KCNK4 (TRAAK) and TREK-2 channels are among the few channels for which direct mechanical gating by membrane stretch has been shown.102,103
Mice with a disrupted KCNK2 gene display an enhanced sensitivity to heat and mild mechanical stimuli but a normal withdrawal threshold to noxious mechanical pressure applied to the hindpaw in the Randall–Selitto test.104 KCNK2-deficient mice also display increased thermal and mechanical hyperalgesia in inflammatory conditions. KCNK4 knockout mice are hypersensitive to mild mechanical stimulation, and this hypersensitivity is increased by the additional inactivation of KCNK2.105 The increased mechanosensitivity of these knockout mice could mean that stretch normally activates both depolarizing and repolarizing mechanosensitive currents in a coordinated way, much as depolarizing and repolarizing voltage-gated currents are kept in balance.
KCNK18 (TRESK) is a major contributor to the background K+ conductance that regulates the resting membrane potential of somatosensory neurones.106 Although it is not known whether KCNK18 is directly sensitive to mechanical stimulation, it may play a role in mediating responses to light touch as well as to painful mechanical stimuli. KCNK18 and, to a lesser extent, KCNK3 are proposed to be the molecular targets of hydroxy-α-sanshool, a compound found in Szechuan peppercorns that activates touch receptors and induces a tingling sensation in humans.107,108
The voltage-dependent K+ channel KCNQ4 (Kv7.4) is crucial for setting the velocity and frequency preference of a subpopulation of rapidly adapting mechanoreceptors in both mice and humans. Mutation of KCNQ4 was initially associated with a form of hereditary deafness. Interestingly, a recent study localizes KCNQ4 to the peripheral nerve endings of cutaneous rapidly adapting hair follicle afferents and Meissner corpuscles. Accordingly, loss of KCNQ4 function leads to a selective enhancement of mechanoreceptor sensitivity to low-frequency vibration. Notably, people with late-onset hearing loss due to dominant mutations of the KCNQ4 gene show enhanced performance in detecting small-amplitude, low-frequency vibration.109
Dr. Alex Jimenez’s Insight
Touch is considered one of the most complex senses in the human body, particularly because there is no specific organ in charge of it. Instead, the sense of touch occurs through sensory receptors, known as mechanoreceptors, which are found across the skin and respond to mechanical pressure or distortion. There are four main types of mechanoreceptors in the glabrous, or hairless, skin of mammals: lamellar corpuscles, tactile corpuscles, Merkel nerve endings and bulbous corpuscles. Mechanoreceptors allow the detection of touch, monitor the position of the muscles, bones and joints (proprioception), and even contribute to detecting sounds and the motion of the body. Understanding the structure and function of these mechanoreceptors is a fundamental element in the development of treatments and therapies for pain management.
Touch is a complex sense because it represents different tactile qualities, namely vibration, shape, texture, pleasure and pain, with different discriminative performances. Up to now, the correspondence between a touch organ and the psychophysical sense has been correlative, and class-specific molecular markers are only just emerging. The development of rodent tests matching the diversity of touch behavior is now required to facilitate future genomic identification. The use of mice that lack specific subsets of sensory afferent types will greatly facilitate identification of the mechanoreceptors and sensory afferent fibers associated with a particular touch modality. Interestingly, a recent paper opens the important question of the genetic basis of mechanosensory traits in humans and suggests that a single gene mutation can negatively influence touch sensitivity.110 This underlines that the pathophysiology of human touch deficits is in large part unknown and would certainly progress by precisely identifying the subset of sensory neurones linked to a touch modality or a touch deficit.
In parallel, progress has been made in defining the biophysical properties of the mechano-gated currents.64 The development in recent years of techniques that allow membrane tension changes to be monitored while mechano-gated currents are recorded has proved a valuable experimental method for describing mechanosensitive currents with rapid, intermediate and slow adaptation (reviewed by Delmas and collaborators).66,111 The next step will be to determine the role of these current properties in the mechanisms of adaptation of functionally diverse mechanoreceptors and the contribution of mechanosensitive K+ currents to the excitability of LTMRs and HTMRs.
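As a rough illustration of what rapid, intermediate and slow adaptation mean for a recorded current, the sketch below assumes a single-exponential decay during a sustained mechanical step; the decay model and the time constants are hypothetical placeholders, not values from the cited work.

```python
import math

# Illustrative sketch of mechanosensitive current adaptation during a
# sustained mechanical step. The single-exponential decay and the time
# constants below are simplifying assumptions for illustration only.

def adapting_current(t_ms, i_peak, tau_ms):
    """Current amplitude at time t for a current that decays
    exponentially from its peak with time constant tau."""
    return i_peak * math.exp(-t_ms / tau_ms)

# Hypothetical time constants for rapidly, intermediately and slowly
# adapting currents (orders of magnitude chosen for illustration):
taus = {"rapid": 5.0, "intermediate": 50.0, "slow": 500.0}

for label, tau in taus.items():
    remaining = adapting_current(t_ms=100.0, i_peak=1.0, tau_ms=tau)
    print(f"{label:>12}: {remaining * 100:5.1f}% of peak left after 100 ms")
```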
The molecular nature of mechano-gated currents in mammals is also a promising research topic. Future research will progress along two lines: first, determining the role of accessory molecules that tether channels to the cytoskeleton and may be required to confer or regulate the mechanosensitivity of ion channels such as those of the TRP and ASIC/ENaC families; second, investigating the large and promising question of the contribution of Piezo channels, by answering key questions about their permeation and gating mechanisms, the subsets of sensory neurones and touch modalities that involve Piezo, and the role of Piezo in non-neuronal cells associated with mechanosensation.
The sense of touch, in comparison to sight, taste, sound and smell, which rely on specific organs to process these sensations, can occur all throughout the body through tiny receptors known as mechanoreceptors. Different types of mechanoreceptors can be found in various layers of the skin, where they detect a wide array of mechanical stimulation. The article above describes specific highlights which demonstrate the progress of structural and functional mechanisms of mechanoreceptors associated with the sense of touch. Information referenced from the National Center for Biotechnology Information (NCBI). The scope of our information is limited to chiropractic as well as to spinal injuries and conditions. To discuss the subject matter, please feel free to ask Dr. Jimenez or contact us at 915-850-0900.
Curated by Dr. Alex Jimenez
1. Moriwaki K, Yuge O. Topographical features of cutaneous tactile hypoesthetic and hyperesthetic abnormalities in chronic pain. Pain. 1999;81:1–6. doi: 10.1016/S0304-3959(98)00257-7. [PubMed] [Cross Ref]
2. Shim B, Kim DW, Kim BH, Nam TS, Leem JW, Chung JM. Mechanical and heat sensitization of cutaneous nociceptors in rats with experimental peripheral neuropathy. Neuroscience. 2005;132:193–201. doi: 10.1016/j.neuroscience.2004.12.036. [PubMed] [Cross Ref]
3. Kleggetveit IP, Jørum E. Large and small fiber dysfunction in peripheral nerve injuries with or without spontaneous pain. J Pain. 2010;11:1305–10. doi: 10.1016/j.jpain.2010.03.004. [PubMed] [Cross Ref]
4. Noback CR. Morphology and phylogeny of hair. Ann N Y Acad Sci. 1951;53:476–92. doi: 10.1111/j.1749-6632.1951.tb31950.x. [PubMed] [Cross Ref]
5. Straile WE. Atypical guard-hair follicles in the skin of the rabbit. Nature. 1958;181:1604–5. doi: 10.1038/1811604a0. [PubMed] [Cross Ref]
6. Straile WE. The morphology of tylotrich follicles in the skin of the rabbit. Am J Anat. 1961;109:1–13. doi: 10.1002/aja.1001090102. [PubMed] [Cross Ref]
7. Millard CL, Woolf CJ. Sensory innervation of the hairs of the rat hindlimb: a light microscopic analysis. J Comp Neurol. 1988;277:183–94. doi: 10.1002/cne.902770203. [PubMed] [Cross Ref]
8. Hamann W. Mammalian cutaneous mechanoreceptors. Prog Biophys Mol Biol. 1995;64:81–104. doi: 10.1016/0079-6107(95)00011-9. [Review] [PubMed] [Cross Ref]
9. Brown AG, Iggo A. A quantitative study of cutaneous receptors and afferent fibres in the cat and rabbit. J Physiol. 1967;193:707–33. [PMC free article] [PubMed]
10. Burgess PR, Petit D, Warren RM. Receptor types in cat hairy skin supplied by myelinated fibers. J Neurophysiol. 1968;31:833–48. [PubMed]
11. Driskell RR, Giangreco A, Jensen KB, Mulder KW, Watt FM. Sox2-positive dermal papilla cells specify hair follicle type in mammalian epidermis. Development. 2009;136:2815–23. doi: 10.1242/dev.038620. [PMC free article] [PubMed] [Cross Ref]
12. Hussein MA. The overall pattern of hair follicle arrangement in the rat and mouse. J Anat. 1971;109:307–16. [PMC free article] [PubMed]
13. Vielkind U, Hardy MH. Changing patterns of cell adhesion molecules during mouse pelage hair follicle development. 2. Follicle morphogenesis in the hair mutants, Tabby and downy. Acta Anat (Basel) 1996;157:183–94. doi: 10.1159/000147880. [PubMed] [Cross Ref]
14. Hardy MH, Vielkind U. Changing patterns of cell adhesion molecules during mouse pelage hair follicle development. 1. Follicle morphogenesis in wild-type mice. Acta Anat (Basel) 1996;157:169–82. doi: 10.1159/000147879. [PubMed] [Cross Ref]
15. Li L, Rutlin M, Abraira VE, Cassidy C, Kus L, Gong S, et al. The functional organization of cutaneous low-threshold mechanosensory neurons. Cell. 2011;147:1615–27. doi: 10.1016/j.cell.2011.11.027. [PMC free article] [PubMed] [Cross Ref]
16. Brown AG, Iggo A. A quantitative study of cutaneous receptors and afferent fibres in the cat and rabbit. J Physiol. 1967;193:707–33. [PMC free article] [PubMed]
17. Burgess PR, Petit D, Warren RM. Receptor types in cat hairy skin supplied by myelinated fibers. J Neurophysiol. 1968;31:833–48. [PubMed]
18. Vallbo A, Olausson H, Wessberg J, Norrsell U. A system of unmyelinated afferents for innocuous mechanoreception in the human skin. Brain Res. 1993;628:301–4. doi: 10.1016/0006-8993(93)90968-S. [PubMed] [Cross Ref]
19. Vallbo AB, Olausson H, Wessberg J. Unmyelinated afferents constitute a second system coding tactile stimuli of the human hairy skin. J Neurophysiol. 1999;81:2753–63. [PubMed]
20. Hertenstein MJ, Keltner D, App B, Bulleit BA, Jaskolka AR. Touch communicates distinct emotions. Emotion. 2006;6:528–33. doi: 10.1037/1528-3542.6.3.528. [PubMed] [Cross Ref]
21. McGlone F, Vallbo AB, Olausson H, Loken L, Wessberg J. Discriminative touch and emotional touch. Can J Exp Psychol. 2007;61:173–83. doi: 10.1037/cjep2007019. [PubMed] [Cross Ref]
22. Wessberg J, Olausson H, Fernström KW, Vallbo AB. Receptive field properties of unmyelinated tactile afferents in the human skin. J Neurophysiol. 2003;89:1567–75. doi: 10.1152/jn.00256.2002. [PubMed] [Cross Ref]
23. Liu Q, Vrontou S, Rice FL, Zylka MJ, Dong X, Anderson DJ. Molecular genetic visualization of a rare subset of unmyelinated sensory neurons that may detect gentle touch. Nat Neurosci. 2007;10:946–8. doi: 10.1038/nn1937. [PubMed] [Cross Ref]
24. Olausson H, Lamarre Y, Backlund H, Morin C, Wallin BG, Starck G, et al. Unmyelinated tactile afferents signal touch and project to insular cortex. Nat Neurosci. 2002;5:900–4. doi: 10.1038/nn896. [PubMed] [Cross Ref]
25. Olausson H, Wessberg J, Morrison I, McGlone F, Vallbo A. The neurophysiology of unmyelinated tactile afferents. Neurosci Biobehav Rev. 2010;34:185–91. doi: 10.1016/j.neubiorev.2008.09.011. [Review] [PubMed] [Cross Ref]
26. Krämer HH, Lundblad L, Birklein F, Linde M, Karlsson T, Elam M, et al. Activation of the cortical pain network by soft tactile stimulation after injection of sumatriptan. Pain. 2007;133:72–8. doi: 10.1016/j.pain.2007.03.001. [PubMed] [Cross Ref]
27. Applebaum AE, Beall JE, Foreman RD, Willis WD. Organization and receptive fields of primate spinothalamic tract neurons. J Neurophysiol. 1975;38:572–86. [PubMed]
28. White JC, Sweet WH. Effectiveness of chordotomy in phantom pain after amputation. AMA Arch Neurol Psychiatry. 1952;67:315–22. [PubMed]
29. Halata Z, Grim M, Bauman KI. Friedrich Sigmund Merkel and his “Merkel cell”, morphology, development, and physiology: review and new results. Anat Rec A Discov Mol Cell Evol Biol. 2003;271:225–39. doi: 10.1002/ar.a.10029. [PubMed] [Cross Ref]
30. Morrison KM, Miesegaes GR, Lumpkin EA, Maricich SM. Mammalian Merkel cells are descended from the epidermal lineage. Dev Biol. 2009;336:76–83. doi: 10.1016/j.ydbio.2009.09.032. [PMC free article] [PubMed] [Cross Ref]
31. Van Keymeulen A, Mascre G, Youseff KK, Harel I, Michaux C, De Geest N, et al. Epidermal progenitors give rise to Merkel cells during embryonic development and adult homeostasis. J Cell Biol. 2009;187:91–100. doi: 10.1083/jcb.200907080. [PMC free article] [PubMed] [Cross Ref]
32. Ebara S, Kumamoto K, Baumann KI, Halata Z. Three-dimensional analyses of touch domes in the hairy skin of the cat paw reveal morphological substrates for complex sensory processing. Neurosci Res. 2008;61:159–71. doi: 10.1016/j.neures.2008.02.004. [PubMed] [Cross Ref]
33. Guinard D, Usson Y, Guillermet C, Saxod R. Merkel complexes of human digital skin: three-dimensional imaging with confocal laser microscopy and double immunofluorescence. J Comp Neurol. 1998;398:98–104. doi: 10.1002/(SICI)1096-9861(19980817)398:1<98::AID-CNE6>3.0.CO;2-4. [PubMed] [Cross Ref]
34. Reinisch CM, Tschachler E. The touch dome in human skin is supplied by different types of nerve fibers. Ann Neurol. 2005;58:88–95. doi: 10.1002/ana.20527. [PubMed] [Cross Ref]
35. Maricich SM, Morrison KM, Mathes EL, Brewer BM. Rodents rely on Merkel cells for texture discrimination tasks. J Neurosci. 2012;32:3296–300. doi: 10.1523/JNEUROSCI.5307-11.2012. [PMC free article] [PubMed] [Cross Ref]
36. Ikeda I, Yamashita Y, Ono T, Ogawa H. Selective phototoxic destruction of rat Merkel cells abolishes responses of slowly adapting type I mechanoreceptor units. J Physiol. 1994;479:247–56. [PMC free article] [PubMed]
37. Maricich SM, Wellnitz SA, Nelson AM, Lesniak DR, Gerling GJ, Lumpkin EA, et al. Merkel cells are essential for light-touch responses. Science. 2009;324:1580–2. doi: 10.1126/science.1172890. [PMC free article] [PubMed] [Cross Ref]
38. Diamond J, Holmes M, Nurse CA. Are Merkel cell-neurite reciprocal synapses involved in the initiation of tactile responses in salamander skin? J Physiol. 1986;376:101–20. [PMC free article] [PubMed]
39. Yamashita Y, Akaike N, Wakamori M, Ikeda I, Ogawa H. Voltage-dependent currents in isolated single Merkel cells of rats. J Physiol. 1992;450:143–62. [PMC free article] [PubMed]
40. Wellnitz SA, Lesniak DR, Gerling GJ, Lumpkin EA. The regularity of sustained firing reveals two populations of slowly adapting touch receptors in mouse hairy skin. J Neurophysiol. 2010;103:3378–88. doi: 10.1152/jn.00810.2009. [PMC free article] [PubMed] [Cross Ref]
41. Nandasena BG, Suzuki A, Aita M, Kawano Y, Nozawa-Inoue K, Maeda T. Immunolocalization of aquaporin-1 in the mechanoreceptive Ruffini endings in the periodontal ligament. Brain Res. 2007;1157:32–40. doi: 10.1016/j.brainres.2007.04.033. [PubMed] [Cross Ref]
42. Rahman F, Harada F, Saito I, Suzuki A, Kawano Y, Izumi K, et al. Detection of acid-sensing ion channel 3 (ASIC3) in periodontal Ruffini endings of mouse incisors. Neurosci Lett. 2011;488:173–7. doi: 10.1016/j.neulet.2010.11.023. [PubMed] [Cross Ref]
43. Johnson KO. The roles and functions of cutaneous mechanoreceptors. Curr Opin Neurobiol. 2001;11:455–61. doi: 10.1016/S0959-4388(00)00234-8. [Review] [PubMed] [Cross Ref]
44. Wende H, Lechner SG, Cheret C, Bourane S, Kolanczyk ME, Pattyn A, et al. The transcription factor c-Maf controls touch receptor development and function. Science. 2012;335:1373–6. doi: 10.1126/science.1214314. [PubMed] [Cross Ref]
45. Mendelson M, Lowenstein WR. Mechanisms of receptor adaptation. Science. 1964;144:554–5. doi: 10.1126/science.144.3618.554. [PubMed] [Cross Ref]
46. Loewenstein WR, Mendelson M. Components of receptor adaptation in a pacinian corpuscle. J Physiol. 1965;177:377–97. [PMC free article] [PubMed]
47. Pawson L, Prestia LT, Mahoney GK, Güçlü B, Cox PJ, Pack AK. GABAergic/glutamatergic-glial/neuronal interaction contributes to rapid adaptation in pacinian corpuscles. J Neurosci. 2009;29:2695–705. doi: 10.1523/JNEUROSCI.5974-08.2009. [PMC free article] [PubMed] [Cross Ref]
48. Basbaum AI, Jessell TM. The perception of pain. In: Kandel ER, Schwartz JH, Jessell TM, eds. Principles of Neural Science. Fourth edition. The McGraw-Hill Companies, 2000: 472-490.
49. Bourane S, Garces A, Venteo S, Pattyn A, Hubert T, Fichard A, et al. Low-threshold mechanoreceptor subtypes selectively express MafA and are specified by Ret signaling. Neuron. 2009;64:857–70. doi: 10.1016/j.neuron.2009.12.004. [PubMed] [Cross Ref]
50. Kramer I, Sigrist M, de Nooij JC, Taniuchi I, Jessell TM, Arber S. A role for Runx transcription factor signaling in dorsal root ganglion sensory neuron diversification. Neuron. 2006;49:379–93. doi: 10.1016/j.neuron.2006.01.008. [PubMed] [Cross Ref]
51. Luo W, Enomoto H, Rice FL, Milbrandt J, Ginty DD. Molecular identification of rapidly adapting mechanoreceptors and their developmental dependence on ret signaling. Neuron. 2009;64:841–56. doi: 10.1016/j.neuron.2009.11.003. [PMC free article] [PubMed] [Cross Ref]
52. Vallbo AB, Hagbarth KE. Activity from skin mechanoreceptors recorded percutaneously in awake human subjects. Exp Neurol. 1968;21:270–89. doi: 10.1016/0014-4886(68)90041-1. [PubMed] [Cross Ref]
53. Macefield VG. Physiological characteristics of low-threshold mechanoreceptors in joints, muscle and skin in human subjects. Clin Exp Pharmacol Physiol. 2005;32:135–44. doi: 10.1111/j.1440-1681.2005.04143.x. [Review] [PubMed] [Cross Ref]
54. Koizumi S, Fujishita K, Inoue K, Shigemoto-Mogami Y, Tsuda M, Inoue K. Ca2+ waves in keratinocytes are transmitted to sensory neurons: the involvement of extracellular ATP and P2Y2 receptor activation. Biochem J. 2004;380:329–38. doi: 10.1042/BJ20031089. [PMC free article] [PubMed] [Cross Ref]
55. Azorin N, Raoux M, Rodat-Despoix L, Merrot T, Delmas P, Crest M. ATP signalling is crucial for the response of human keratinocytes to mechanical stimulation by hypo-osmotic shock. Exp Dermatol. 2011;20:401–7. doi: 10.1111/j.1600-0625.2010.01219.x. [PubMed] [Cross Ref]
56. Amano M, Fukata Y, Kaibuchi K. Regulation and functions of Rho-associated kinase. Exp Cell Res. 2000;261:44–51. doi: 10.1006/excr.2000.5046. [Review] [PubMed] [Cross Ref]
57. Koyama T, Oike M, Ito Y. Involvement of Rho-kinase and tyrosine kinase in hypotonic stress-induced ATP release in bovine aortic endothelial cells. J Physiol. 2001;532:759–69. doi: 10.1111/j.1469-7793.2001.0759e.x. [PMC free article] [PubMed] [Cross Ref]
58. Perl ER. Cutaneous polymodal receptors: characteristics and plasticity. Prog Brain Res. 1996;113:21–37. doi: 10.1016/S0079-6123(08)61079-1. [Review] [PubMed] [Cross Ref]
59. McCarter GC, Reichling DB, Levine JD. Mechanical transduction by rat dorsal root ganglion neurons in vitro. Neurosci Lett. 1999;273:179–82. doi: 10.1016/S0304-3940(99)00665-5. [PubMed] [Cross Ref]
60. Drew LJ, Wood JN, Cesare P. Distinct mechanosensitive properties of capsaicin-sensitive and -insensitive sensory neurons. J Neurosci. 2002;22:RC228. [PubMed]
61. Drew LJ, Rohrer DK, Price MP, Blaver KE, Cockayne DA, Cesare P, et al. Acid-sensing ion channels ASIC2 and ASIC3 do not contribute to mechanically activated currents in mammalian sensory neurones. J Physiol. 2004;556:691–710. doi: 10.1113/jphysiol.2003.058693. [PMC free article] [PubMed] [Cross Ref]
62. McCarter GC, Levine JD. Ionic basis of a mechanotransduction current in adult rat dorsal root ganglion neurons. Mol Pain. 2006;2:28. doi: 10.1186/1744-8069-2-28. [PMC free article] [PubMed] [Cross Ref]
63. Coste B, Crest M, Delmas P. Pharmacological dissection and distribution of NaN/Nav1.9, T-type Ca2+ currents, and mechanically activated cation currents in different populations of DRG neurons. J Gen Physiol. 2007;129:57–77. doi: 10.1085/jgp.200609665. [PMC free article] [PubMed] [Cross Ref]
64. Hao J, Delmas P. Multiple desensitization mechanisms of mechanotransducer channels shape firing of mechanosensory neurons. J Neurosci. 2010;30:13384–95. doi: 10.1523/JNEUROSCI.2926-10.2010. [PubMed] [Cross Ref]
65. Drew LJ, Wood JN. FM1-43 is a permeant blocker of mechanosensitive ion channels in sensory neurons and inhibits behavioural responses to mechanical stimuli. Mol Pain. 2007;3:1. doi: 10.1186/1744-8069-3-1. [PMC free article] [PubMed] [Cross Ref]
66. Hao J, Delmas P. Recording of mechanosensitive currents using piezoelectrically driven mechanostimulator. Nat Protoc. 2011;6:979–90. doi: 10.1038/nprot.2011.343. [PubMed] [Cross Ref]
67. Rugiero F, Drew LJ, Wood JN. Kinetic properties of mechanically activated currents in spinal sensory neurons. J Physiol. 2010;588:301–14. doi: 10.1113/jphysiol.2009.182360. [PMC free article] [PubMed] [Cross Ref]
68. Hu J, Lewin GR. Mechanosensitive currents in the neurites of cultured mouse sensory neurones. J Physiol. 2006;577:815–28. doi: 10.1113/jphysiol.2006.117648. [PMC free article] [PubMed] [Cross Ref]
69. Bhattacharya MR, Bautista DM, Wu K, Haeberle H, Lumpkin EA, Julius D. Radial stretch reveals distinct populations of mechanosensitive mammalian somatosensory neurons. Proc Natl Acad Sci U S A. 2008;105:20015–20. doi: 10.1073/pnas.0810801105. [PMC free article] [PubMed] [Cross Ref]
70. Crawford AC, Evans MG, Fettiplace R. Activation and adaptation of transducer currents in turtle hair cells. J Physiol. 1989;419:405–34. [PMC free article] [PubMed]
71. Ricci AJ, Wu YC, Fettiplace R. The endogenous calcium buffer and the time course of transducer adaptation in auditory hair cells. J Neurosci. 1998;18:8261–77. [PubMed]
72. Vollrath MA, Kwan KY, Corey DP. The micromachinery of mechanotransduction in hair cells. Annu Rev Neurosci. 2007;30:339–65. doi: 10.1146/annurev.neuro.29.051605.112917. [Review] [PMC free article] [PubMed] [Cross Ref]
73. Goodman MB, Schwarz EM. Transducing touch in Caenorhabditis elegans. Annu Rev Physiol. 2003;65:429–52. doi: 10.1146/annurev.physiol.65.092101.142659. [Review] [PubMed] [Cross Ref]
74. Waldmann R, Lazdunski MH. H(+)-gated cation channels: neuronal acid sensors in the NaC/DEG family of ion channels. Curr Opin Neurobiol. 1998;8:418–24. doi: 10.1016/S0959-4388(98)80070-6. [Review] [PubMed] [Cross Ref]
75. Page AJ, Brierley SM, Martin CM, Martinez-Salgado C, Wemmie JA, Brennan TJ, et al. The ion channel ASIC1 contributes to visceral but not cutaneous mechanoreceptor function. Gastroenterology. 2004;127:1739–47. doi: 10.1053/j.gastro.2004.08.061. [PubMed] [Cross Ref]
76. Price MP, McIlwrath SL, Xie J, Cheng C, Qiao J, Tarr DE, et al. The DRASIC cation channel contributes to the detection of cutaneous touch and acid stimuli in mice. Neuron. 2001;32:1071–83. doi: 10.1016/S0896-6273(01)00547-5. [Erratum in: Neuron 2002 Jul 18;35] [PubMed] [Cross Ref]
77. Roza C, Puel JL, Kress M, Baron A, Diochot S, Lazdunski M, et al. Knockout of the ASIC2 channel in mice does not impair cutaneous mechanosensation, visceral mechanonociception and hearing. J Physiol. 2004;558:659–69. doi: 10.1113/jphysiol.2004.066001. [PMC free article] [PubMed] [Cross Ref]
78. Damann N, Voets T, Nilius B. TRPs in our senses. Curr Biol. 2008;18:R880–9. doi: 10.1016/j.cub.2008.07.063. [Review] [PubMed] [Cross Ref]
79. Christensen AP, Corey DP. TRP channels in mechanosensation: direct or indirect activation? Nat Rev Neurosci. 2007;8:510–21. doi: 10.1038/nrn2149. [Review] [PubMed] [Cross Ref]
80. Liedtke W, Tobin DM, Bargmann CI, Friedman JM. Mammalian TRPV4 (VR-OAC) directs behavioral responses to osmotic and mechanical stimuli in Caenorhabditis elegans. Proc Natl Acad Sci U S A. 2003;100(Suppl 2):14531–6. doi: 10.1073/pnas.2235619100. [PMC free article] [PubMed] [Cross Ref]
81. Suzuki M, Mizuno A, Kodaira K, Imai M. Impaired pressure sensation in mice lacking TRPV4. J Biol Chem. 2003;278:22664–8. doi: 10.1074/jbc.M302561200. [PubMed] [Cross Ref]
82. Liedtke W, Choe Y, Martí-Renom MA, Bell AM, Denis CS, Sali A, et al. Vanilloid receptor-related osmotically activated channel (VR-OAC), a candidate vertebrate osmoreceptor. Cell. 2000;103:525–35. doi: 10.1016/S0092-8674(00)00143-4. [PMC free article] [PubMed] [Cross Ref]
83. Alessandri-Haber N, Dina OA, Yeh JJ, Parada CA, Reichling DB, Levine JD. Transient receptor potential vanilloid 4 is essential in chemotherapy-induced neuropathic pain in the rat. J Neurosci. 2004;24:4444–52. doi: 10.1523/JNEUROSCI.0242-04.2004. [Erratum in: J Neurosci. 2004 Jun;24] [PubMed] [Cross Ref]
84. Bautista DM, Jordt SE, Nikai T, Tsuruda PR, Read AJ, Poblete J, et al. TRPA1 mediates the inflammatory actions of environmental irritants and proalgesic agents. Cell. 2006;124:1269–82. doi: 10.1016/j.cell.2006.02.023. [PubMed] [Cross Ref]
85. Kwan KY, Allchorne AJ, Vollrath MA, Christensen AP, Zhang DS, Woolf CJ, et al. TRPA1 contributes to cold, mechanical, and chemical nociception but is not essential for hair-cell transduction. Neuron. 2006;50:277–89. doi: 10.1016/j.neuron.2006.03.042. [PubMed] [Cross Ref]
86. Coste B, Mathur J, Schmidt M, Earley TJ, Ranade S, Petrus MJ, et al. Piezo1 and Piezo2 are essential components of distinct mechanically activated cation channels. Science. 2010;330:55–60. doi: 10.1126/science.1193270. [PMC free article] [PubMed] [Cross Ref]
87. Coste B, Xiao B, Santos JS, Syeda R, Grandl J, Spencer KS, et al. Piezo proteins are pore-forming subunits of mechanically activated channels. Nature. 2012;483:176–81. doi: 10.1038/nature10812. [PMC free article] [PubMed] [Cross Ref]
88. Bae C, Sachs F, Gottlieb PA. The mechanosensitive ion channel Piezo1 is inhibited by the peptide GsMTx4. Biochemistry. 2011;50:6295–300. doi: 10.1021/bi200770q. [PMC free article] [PubMed] [Cross Ref]
89. Kim SE, Coste B, Chadha A, Cook B, Patapoutian A. The role of Drosophila Piezo in mechanical nociception. Nature. 2012;483:209–12. doi: 10.1038/nature10801. [PMC free article] [PubMed] [Cross Ref]
90. Zarychanski R, Schulz VP, Houston BL, Maksimova Y, Houston DS, Smith B, et al. Mutations in the mechanotransduction protein PIEZO1 are associated with hereditary xerocytosis. Blood. 2012;120:1908–15. doi: 10.1182/blood-2012-04-422253. [PMC free article] [PubMed] [Cross Ref]
91. Kawashima Y, Géléoc GS, Kurima K, Labay V, Lelli A, Asai Y, et al. Mechanotransduction in mouse inner ear hair cells requires transmembrane channel-like genes. J Clin Invest. 2011;121:4796–809. doi: 10.1172/JCI60405. [PMC free article] [PubMed] [Cross Ref]
92. Tlili A, Rebeh IB, Aifa-Hmani M, Dhouib H, Moalla J, Tlili-Chouchène J, et al. TMC1 but not TMC2 is responsible for autosomal recessive nonsyndromic hearing impairment in Tunisian families. Audiol Neurootol. 2008;13:213–8. doi: 10.1159/000115430. [PubMed] [Cross Ref]
93. Manji SS, Miller KA, Williams LH, Dahl HH. Identification of three novel hearing loss mouse strains with mutations in the Tmc1 gene. Am J Pathol. 2012;180:1560–9. doi: 10.1016/j.ajpath.2011.12.034. [PubMed] [Cross Ref]
94. Wetzel C, Hu J, Riethmacher D, Benckendorff A, Harder L, Eilers A, et al. A stomatin-domain protein essential for touch sensation in the mouse. Nature. 2007;445:206–9. doi: 10.1038/nature05394. [PubMed] [Cross Ref]
95. Martinez-Salgado C, Benckendorff AG, Chiang LY, Wang R, Milenkovic N, Wetzel C, et al. Stomatin and sensory neuron mechanotransduction. J Neurophysiol. 2007;98:3802–8. doi: 10.1152/jn.00860.2007. [PubMed] [Cross Ref]
96. Huang M, Gu G, Ferguson EL, Chalfie M. A stomatin-like protein necessary for mechanosensation in C. elegans. Nature. 1995;378:292–5. doi: 10.1038/378292a0. [PubMed] [Cross Ref]
97. Hu J, Chiang LY, Koch M, Lewin GR. Evidence for a protein tether involved in somatic touch. EMBO J. 2010;29:855–67. doi: 10.1038/emboj.2009.398. [PMC free article] [PubMed] [Cross Ref]
98. Chiang LY, Poole K, Oliveira BE, Duarte N, Sierra YA, Bruckner-Tuderman L, et al. Laminin-332 coordinates mechanotransduction and growth cone bifurcation in sensory neurons. Nat Neurosci. 2011;14:993–1000. doi: 10.1038/nn.2873. [PubMed] [Cross Ref]
99. Lesage F, Guillemare E, Fink M, Duprat F, Lazdunski M, Romey G, et al. TWIK-1, a ubiquitous human weakly inward rectifying K+ channel with a novel structure. EMBO J. 1996;15:1004–11. [PMC free article] [PubMed]
100. Lesage F. Pharmacology of neuronal background potassium channels. Neuropharmacology. 2003;44:1–7. doi: 10.1016/S0028-3908(02)00339-8. [Review] [PubMed] [Cross Ref]
101. Medhurst AD, Rennie G, Chapman CG, Meadows H, Duckworth MD, Kelsell RE, et al. Distribution analysis of human two pore domain potassium channels in tissues of the central nervous system and periphery. Brain Res Mol Brain Res. 2001;86:101–14. doi: 10.1016/S0169-328X(00)00263-1. [PubMed] [Cross Ref]
102. Maingret F, Patel AJ, Lesage F, Lazdunski M, Honoré E. Mechano- or acid stimulation, two interactive modes of activation of the TREK-1 potassium channel. J Biol Chem. 1999;274:26691–6. doi: 10.1074/jbc.274.38.26691. [PubMed] [Cross Ref]
103. Maingret F, Fosset M, Lesage F, Lazdunski M, Honoré E. TRAAK is a mammalian neuronal mechano-gated K+ channel. J Biol Chem. 1999;274:1381–7. doi: 10.1074/jbc.274.3.1381. [PubMed] [Cross Ref]
104. Alloui A, Zimmermann K, Mamet J, Duprat F, Noël J, Chemin J, et al. TREK-1, a K+ channel involved in polymodal pain perception. EMBO J. 2006;25:2368–76. doi: 10.1038/sj.emboj.7601116. [PMC free article] [PubMed] [Cross Ref]
105. Noël J, Zimmermann K, Busserolles J, Deval E, Alloui A, Diochot S, et al. The mechano-activated K+ channels TRAAK and TREK-1 control both warm and cold perception. EMBO J. 2009;28:1308–18. doi: 10.1038/emboj.2009.57. [PMC free article] [PubMed] [Cross Ref]
106. Dobler T, Springauf A, Tovornik S, Weber M, Schmitt A, Sedlmeier R, et al. TRESK two-pore-domain K+ channels constitute a significant component of background potassium currents in murine dorsal root ganglion neurones. J Physiol. 2007;585:867–79. doi: 10.1113/jphysiol.2007.145649. [PMC free article] [PubMed] [Cross Ref]
107. Bautista DM, Sigal YM, Milstein AD, Garrison JL, Zorn JA, Tsuruda PR, et al. Pungent agents from Szechuan peppers excite sensory neurons by inhibiting two-pore potassium channels. Nat Neurosci. 2008;11:772–9. doi: 10.1038/nn.2143. [PMC free article] [PubMed] [Cross Ref]
108. Lennertz RC, Tsunozaki M, Bautista DM, Stucky CL. Physiological basis of tingling paresthesia evoked by hydroxy-alpha-sanshool. J Neurosci. 2010;30:4353–61. doi: 10.1523/JNEUROSCI.4666-09.2010. [PMC free article] [PubMed] [Cross Ref]
109. Heidenreich M, Lechner SG, Vardanyan V, Wetzel C, Cremers CW, De Leenheer EM, et al. KCNQ4 K(+) channels tune mechanoreceptors for normal touch sensation in mouse and man. Nat Neurosci. 2012;15:138–45. doi: 10.1038/nn.2985. [PubMed] [Cross Ref]
110. Frenzel H, Bohlender J, Pinsker K, Wohlleben B, Tank J, Lechner SG, et al. A genetic basis for mechanosensory traits in humans. PLoS Biol. 2012;10:e1001318. doi: 10.1371/journal.pbio.1001318. [PMC free article] [PubMed] [Cross Ref]
111. Delmas P, Hao J, Rodat-Despoix L. Molecular mechanisms of mechanotransduction in mammalian sensory neurons. Nat Rev Neurosci. 2011;12:139–53. doi: 10.1038/nrn2993. [PubMed] [Cross Ref]
Additional Topics: Back Pain
Back pain is one of the most prevalent causes of disability and missed days at work worldwide. In fact, back pain is cited as the second most common reason for doctor office visits, outnumbered only by upper-respiratory infections. Approximately 80 percent of the population will experience some type of back pain at least once throughout their life. The spine is a complex structure made up of bones, joints, ligaments and muscles, among other soft tissues. Because of this, injuries and/or aggravated conditions, such as herniated discs, can eventually lead to symptoms of back pain. Sports injuries and automobile accident injuries are among the most frequent causes of back pain; however, sometimes the simplest of movements can have painful results. Fortunately, alternative treatment options, such as chiropractic care, can help ease back pain through the use of spinal adjustments and manual manipulations, ultimately improving pain relief.
[Infobox: A specimen of the NWA 869 chondrite (type L4-6), showing chondrules and metal flakes. Parent body: small to medium asteroids that were never part of a body large enough to undergo melting and planetary differentiation. Total known specimens: over 27,000.]
Chondrites are stony (non-metallic) meteorites that have not been modified by melting or differentiation of the parent body. They formed when various types of dust and small grains present in the early Solar System accreted into primitive asteroids. They are the most common type of meteorite that falls to Earth, with estimates of the proportion of total falls they represent ranging between 85.7% and 86.2%. Their study provides important clues for understanding the origin and age of the Solar System, the synthesis of organic compounds, the origin of life, and the presence of water on Earth. One of their characteristics is the presence of chondrules: round grains, formed from distinct minerals, that normally constitute between 20% and 80% of a chondrite by volume.
There are currently over 27,000 chondrites in the world's collections. The largest individual stone ever recovered, weighing 1770 kg, was part of the Jilin meteorite shower of 1976. Chondrite falls range from single stones to extraordinary showers consisting of thousands of individual stones, as occurred in the Holbrook fall of 1912, where an estimated 14,000 stones rained down on northern Arizona.
Origin and history
Chondrites were formed by the accretion of particles of dust and grit present in the primitive Solar System, which gave rise to asteroids over 4.55 billion years ago. These asteroid parent bodies of chondrites are (or were) small to medium-sized asteroids that were never part of any body large enough to undergo melting and planetary differentiation. Dating using 206Pb/204Pb gives an estimated age of 4,566.6 ± 1.0 Ma, matching the ages obtained with other chronometers. Another indication of their age is the fact that the abundance of non-volatile elements in chondrites is similar to that found in the atmosphere of the Sun and other stars in our galaxy. Although chondritic asteroids never became hot enough internally to melt, many of them reached temperatures high enough to experience significant thermal metamorphism in their interiors. The source of the heat was most likely the energy released by the decay of short-lived radioisotopes (half-lives of less than a few million years) present in the newly formed Solar System, especially 26Al and 60Fe, although heating may also have been caused by impacts onto the asteroids. Many chondritic asteroids also contained significant amounts of water, possibly due to the accretion of ice along with rocky material. As a result, many chondrites contain hydrous minerals, such as clays, that formed when the water interacted with the rock on the asteroid in a process known as aqueous alteration. In addition, all chondritic asteroids were affected by impact and shock processes due to collisions with other asteroids. These events caused a variety of effects, ranging from simple compaction to brecciation, veining, localized melting, and the formation of high-pressure minerals. The net result of these secondary thermal, aqueous, and shock processes is that only a few known chondrites preserve in pristine form the original dust, chondrules, and inclusions from which they formed.
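A short sketch of why such short-lived radioisotopes could only have heated asteroids that accreted early: the remaining fraction of an isotope falls off exponentially with its half-life. The half-lives used (~0.72 Myr for 26Al, ~2.6 Myr for 60Fe) are commonly cited values and should be treated as assumptions of the sketch, not figures taken from the text.

```python
# Sketch of why short-lived radioisotopes such as 26Al and 60Fe could
# only heat asteroids that accreted early: their remaining fraction
# decays exponentially. The half-lives below are commonly cited values
# (~0.72 Myr for 26Al, ~2.6 Myr for 60Fe) and are assumptions of this
# sketch, not values taken from the text.

def remaining_fraction(t_myr, half_life_myr):
    """Fraction of a radioisotope left after t_myr million years."""
    return 0.5 ** (t_myr / half_life_myr)

for t in (1, 3, 5, 10):
    al26 = remaining_fraction(t, 0.72)
    fe60 = remaining_fraction(t, 2.6)
    print(f"after {t:2d} Myr: 26Al {al26:.2%}  60Fe {fe60:.2%}")
```

An asteroid that accreted even a few million years "late" would have missed most of the 26Al heat, which is consistent with only early-formed bodies experiencing strong internal metamorphism.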
Prominent among the components present in chondrites are the enigmatic chondrules, millimetre-sized spherical objects that originated as freely floating, molten or partially molten droplets in space; most chondrules are rich in the silicate minerals olivine and pyroxene. Chondrites also contain refractory inclusions (including calcium–aluminium-rich inclusions, CAIs), which are among the oldest objects to form in the Solar System, particles rich in metallic Fe-Ni and sulfides, and isolated grains of silicate minerals. The remainder of a chondrite consists of fine-grained (micrometre-sized or smaller) dust, which may either be present as the matrix of the rock or may form rims or mantles around individual chondrules and refractory inclusions. Embedded in this dust are presolar grains, which predate the formation of our Solar System and originated elsewhere in the galaxy. Chondrules have distinct textures, compositions and mineralogies, and their origin continues to be the subject of some debate. The scientific community generally accepts that these spheres were formed by the action of a shock wave that passed through the Solar System, although there is little agreement as to the cause of this shock wave. An article published in 2005 proposed that the gravitational instability of the gaseous disk that formed Jupiter generated a shock wave with a velocity of more than 10 km/s, which resulted in the formation of the chondrules.
Chondrites are divided into about 15 distinct groups (see Meteorites classification) on the basis of their mineralogy, bulk chemical composition, and oxygen isotope compositions (see below). The various chondrite groups likely originated on separate asteroids or groups of related asteroids. Each chondrite group has a distinctive mixture of chondrules, refractory inclusions, matrix (dust), and other components and a characteristic grain size. Other ways of classifying chondrites include weathering and shock.
Chondrites can also be categorized according to their petrologic type, the degree to which they were thermally metamorphosed or aqueously altered (they are assigned a number between 1 and 7). The chondrules in a chondrite assigned a "3" have not been altered. Larger numbers indicate increasing thermal metamorphism, up to a maximum of 7, where the chondrules have been destroyed. Numbers lower than 3 are given to chondrites whose chondrules have been changed by the presence of water, down to 1, where the chondrules have been obliterated by this alteration.
A synthesis of the various classification schemes is provided in the table below.
| Type | Subtype | Distinguishing features / chondrule character | Letter designation |
|------|---------|-----------------------------------------------|--------------------|
| Enstatite chondrites | | Abundant | E3, EH3, EL3 |
| | | Distinct | E4, EH4, EL4 |
| | | Less distinct | E5, EH5, EL5 |
| | | Indistinct | E6, EH6, EL6 |
| | | Melted | E7, EH7, EL7 |
| Carbonaceous chondrites | Ivuna | Phyllosilicates, magnetite | CI |
| | Vigarano | Olivines rich in Fe, Ca and Al minerals | CV2–CV3.3 |
| | Renazzo | Phyllosilicates, olivine, pyroxene, metals | CR |
| | Ornans | Olivine, pyroxene, metals, Ca and Al minerals | CO3–CO3.7 |
| | Karoonda | Olivine, Ca and Al minerals | CK |
| | High iron | Pyroxene, metals, olivine | CH |
| Rumurutiites | | Olivine, pyroxenes, plagioclase, sulfides | R |
Enstatite chondrites (also known as E-type chondrites) are a rare form of meteorite, thought to comprise only about 2% of the chondrites that fall to Earth. Only about 200 E-type chondrites are currently known. The majority of enstatite chondrites have either been recovered in Antarctica or have been collected by the American National Weather Association. They tend to be high in the mineral enstatite (MgSiO3), from which they derive their name. E-type chondrites are among the most chemically reduced rocks known, with most of their iron taking the form of metal or sulfide rather than of an oxide. This suggests that they were formed in an area that lacked oxygen, probably within the orbit of Mercury.
Ordinary chondrites are by far the most common type of meteorite to fall to Earth: about 80% of all meteorites and over 90% of chondrites are ordinary chondrites. They contain abundant chondrules, sparse matrix (10–15% of the rock), few refractory inclusions, and variable amounts of Fe-Ni metal and troilite (FeS). Their chondrules are generally in the range of 0.5 to 1 mm in diameter. Ordinary chondrites are distinguished chemically by their depletions in refractory lithophile elements, such as Ca, Al, Ti, and rare earths, relative to Si, and isotopically by their unusually high 17O/16O ratios relative to 18O/16O compared to Earth rocks. Most, but not all, ordinary chondrites have experienced significant degrees of metamorphism, having reached temperatures well above 500 °C on the parent asteroids. They are divided into three groups, which have different amounts of metal and different amounts of total iron:
- H chondrites have High total iron and high metallic Fe (15–20% Fe-Ni metal by mass), and smaller chondrules than L and LL chondrites. They are formed of bronzite, olivine, pyroxene, plagioclase, metals and sulfides; ~42% of ordinary chondrite falls belong to this group (see meteorite fall statistics).
- L chondrites have Low total iron contents (including 7–11% Fe-Ni metal by mass). ~46% of ordinary chondrite falls belong to this group, which makes them the most common type of meteorite to fall on Earth.
- LL chondrites have Low total iron and Low metal contents (3–5% Fe-Ni metal by mass, of which about 2% is metallic Fe); they also contain bronzite, oligoclase and olivine. Only 1 in 10 ordinary chondrite falls belongs to this group.
An example of an ordinary chondrite is the NWA 869 meteorite (type L4-6), shown above; a toy classifier based on the metal fractions just quoted follows below.
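The metal fractions quoted above are enough for a toy classifier of the three ordinary chondrite groups; real classification also relies on mineralogy and oxygen-isotope ratios, so this sketch is illustrative only.

```python
# Toy classifier for the ordinary chondrite groups using only the
# Fe-Ni metal mass fractions quoted above (H: 15-20%, L: 7-11%,
# LL: 3-5%). Real classification also relies on mineralogy and oxygen
# isotopes; this sketch is illustrative only.

def ordinary_chondrite_group(metal_mass_percent):
    if 15.0 <= metal_mass_percent <= 20.0:
        return "H (High total iron, high metal)"
    if 7.0 <= metal_mass_percent <= 11.0:
        return "L (Low total iron)"
    if 3.0 <= metal_mass_percent <= 5.0:
        return "LL (Low total iron, Low metal)"
    return "outside the quoted ranges - needs full classification"

for pct in (18.0, 9.0, 4.0, 12.5):
    print(f"{pct:4.1f}% metal -> {ordinary_chondrite_group(pct)}")
```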
Carbonaceous chondrites (also known as C-type chondrites) make up less than 5% of the chondrites that fall on Earth. They are characterized by the presence of carbon compounds, including amino acids. They are thought to have been formed the farthest from the sun of any of the chondrites as they have the highest proportion of volatile compounds. Another of their main characteristics is the presence of water or of minerals that have been altered by the presence of water.
There are many groups of carbonaceous chondrites, but most of them are distinguished chemically by enrichments in refractory lithophile elements relative to Si and isotopically by unusually low 17O/16O ratios relative to 18O/16O compared to Earth rocks. All groups of carbonaceous chondrites except the CH group are named for a characteristic type specimen:
- CI (Ivuna type) chondrites entirely lack chondrules and refractory inclusions; they are composed almost exclusively of fine-grained material that has experienced a high degree of aqueous alteration on the parent asteroid. CI chondrites are highly oxidized, brecciated rocks, containing abundant magnetite and sulfate minerals, and lacking metallic Fe. It is a matter of some controversy whether they once had chondrules and refractory inclusions that were later destroyed during formation of hydrous minerals, or they never had chondrules in the first place. CI chondrites are notable because their chemical compositions closely resemble that of the solar photosphere, neglecting the hydrogen and helium. Thus, they have the most "primitive" compositions of any meteorites and are often used as a standard for assessing the degree of chemical fractionation experienced by materials formed throughout the solar system.
- CO (Ornans type) and CM (Mighei type) chondrites are two related groups that contain very small chondrules, mostly 0.1 to 0.3 mm in diameter; refractory inclusions are quite abundant and have similar sizes to chondrules.
- CM chondrites are composed of about 70% fine-grained material (matrix), and most have experienced extensive aqueous alteration. The much studied Murchison meteorite, which fell in Australia in 1969, is the best-known member of this group.
- CO chondrites have only about 30% matrix and have experienced very little aqueous alteration. Most have experienced small degrees of thermal metamorphism.
- CR (Renazzo type), CB (Bencubbin type), and CH (high metal) carbonaceous chondrites are three groups that seem to be related by their chemical and oxygen isotopic compositions. All are rich in metallic Fe-Ni, with CH and especially CB chondrites having a higher proportion of metal than all other chondrite groups. Although CR chondrites are clearly similar in most ways to other chondrite groups, the origins of CH and CB chondrites are somewhat controversial. Some workers conclude that many of the chondrules and metal grains in these chondrites may have formed by impact processes after "normal" chondrules had already formed, and thus they may not be "true" chondrites.
- CR chondrites have chondrules that are similar in size to those in ordinary chondrites (near 1 mm), few refractory inclusions, and matrix comprises nearly half the rock. Many CR chondrites have experienced extensive aqueous alteration, but some have mostly escaped this process.
- CH chondrites are remarkable for their very tiny chondrules, typically only about 0.02 mm (20 micrometres) in diameter. They have a small proportion of equally tiny refractory inclusions. Dusty material occurs as discrete clasts, rather than as a true matrix. CH chondrites are also distinguished by extreme depletions in volatile elements.
- CB chondrites occur in two types, both of which are similar to CH chondrites in that they are very depleted in volatile elements and rich in metal. CBa (subgroup a) chondrites are coarse grained, with large, often cm-sized chondrules and metal grains and almost no refractory inclusions. Chondrules have unusual textures compared to most other chondrites. As in CH chondrites, dusty material only occurs in discrete clasts and there is no fine-grained matrix. CBb (subgroup b) chondrites contain much smaller (mm-sized) chondrules and do contain refractory inclusions.
- CV (Vigarano type) chondrites are characterized by mm-sized chondrules and abundant refractory inclusions set in a dark matrix that comprises about half the rock. CV chondrites are noted for spectacular refractory inclusions, some of which reach centimetre sizes, and they are the only group to contain a distinctive type of large, once-molten inclusions. Chemically, CV chondrites have the highest abundances of refractory lithophile elements of any chondrite group. The CV group includes the remarkable Allende fall in Mexico in 1969, which became one of the most widely distributed and, certainly, the best-studied meteorite in history.
- CK (Karoonda type) chondrites are chemically and texturally similar to CV chondrites. However, they contain far fewer refractory inclusions than CV, they are much more oxidized rocks, and most of them have experienced considerable amounts of thermal metamorphism (compared to CV and all other groups of carbonaceous chondrites).
- Ungrouped carbonaceous chondrites: A number of chondrites are clearly members of the carbonaceous chondrite class, but do not fit into any of the groups. These include: the Tagish Lake meteorite, which fell in Canada in 2000 and is intermediate between CI and CM chondrites; Coolidge and Loongana 001, which form a grouplet that may be related to CV chondrites; and Acfer 094, an extremely primitive chondrite that shares properties with both CM and CO groups.
Three chondrites form what is known as the K (Kakangari type) grouplet. They are characterized by large amounts of dusty matrix and oxygen-isotope compositions similar to carbonaceous chondrites, by highly reduced mineral compositions and high metal abundances (6% to 10% by volume) that are most like enstatite chondrites, and by concentrations of refractory lithophile elements that are most like ordinary chondrites.
Many of their other characteristics are similar to the O, E and C chondrites.
R (Rumuruti type) chondrites are a very rare group, with only one documented fall out of almost 900 chondrite falls. They have a number of properties in common with ordinary chondrites, including similar types of chondrules, few refractory inclusions, similar chemical compositions for most elements, and 17O/16O ratios that are anomalously high compared to Earth rocks. However, there are significant differences between R chondrites and ordinary chondrites: R chondrites have much more dusty matrix material (about 50% of the rock); they are much more oxidized, containing little metallic Fe-Ni; and their enrichments in 17O are higher than those of ordinary chondrites. Nearly all the metal they contain is oxidized or in the form of sulfides. They contain fewer chondrules than the E chondrites and appear to come from an asteroid's regolith.
Because chondrites accumulated from material that formed very early in the history of the Solar System, and because chondritic asteroids did not melt, they have very primitive compositions. "Primitive," in this sense, means that the abundances of most chemical elements do not differ greatly from those measured by spectroscopic methods in the photosphere of the Sun, which in turn should be well representative of the entire Solar System. To make such a comparison between a gaseous object like the Sun and a rock like a chondrite, scientists choose one rock-forming element, such as silicon, as a reference point and then compare ratios: the atomic ratio of Mg/Si measured in the Sun (1.07) is identical to that measured in CI chondrites.
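The normalization just described can be sketched in a few lines: express each element as an atomic ratio to silicon, then compare the sample's ratio with the solar one. Only the Mg/Si value of 1.07 comes from the text; the remaining numbers are placeholders added to make the example runnable.

```python
# Sketch of the Si-normalization described above: to compare a rock
# with the gaseous Sun, express every element as an atomic ratio to a
# reference rock-forming element (here Si), then compare the ratios.
# Only the Mg/Si value of 1.07 comes from the text; the other numbers
# are placeholders chosen to make the example runnable.

solar_atoms = {"Mg": 1.07, "Si": 1.00, "Fe": 0.90}        # atoms per Si atom
ci_chondrite_atoms = {"Mg": 1.07, "Si": 1.00, "Fe": 0.90}  # placeholder: CI matches solar

def si_normalized_ratio(sample, element, reference="Si"):
    """Atomic abundance of `element` relative to the reference element."""
    return sample[element] / sample[reference]

for el in ("Mg", "Fe"):
    r = si_normalized_ratio(ci_chondrite_atoms, el) / si_normalized_ratio(solar_atoms, el)
    print(f"{el}/Si (CI) / {el}/Si (Sun) = {r:.2f}  (1.00 means 'solar', i.e. primitive)")
```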
Although all chondrite compositions can be considered primitive, there is variation among the different groups, as discussed above. CI chondrites seem to be nearly identical in composition to the sun for all but the gas-forming elements (e.g., hydrogen, carbon, nitrogen, and noble gases). Other chondrite groups deviate from the solar composition (i.e., they are fractionated) in highly systematic ways:
- At some point during the formation of many chondrites, particles of metal became partially separated from particles of silicate minerals. As a result, chondrites coming from asteroids that did not accrete with their full complement of metal (e.g., L, LL, and EL chondrites) are depleted in all siderophile elements, whereas those that accreted too much metal (e.g., CH, CB, and EH chondrites) are enriched in these elements compared to the sun.
- In a similar manner, although the exact process is not very well understood, highly refractory elements like Ca and Al became separated from less refractory elements like Mg and Si, and were not uniformly sampled by each asteroid. The parent bodies of many groups of carbonaceous chondrites contain over-sampled grains rich in refractory elements, whereas those of ordinary and enstatite chondrites were deficient in them.
- No chondrites except the CI group formed with a full, solar complement of volatile elements. In general, the level of depletion corresponds to the degree of volatility, where the most volatile elements are most depleted.
A chondrite's group is determined by its primary chemical, mineralogical, and isotopic characteristics (above). The degree to which it has been affected by the secondary processes of thermal metamorphism and aqueous alteration on the parent asteroid is indicated by its petrologic type, which appears as a number following the group name (e.g., an LL5 chondrite belongs to the LL group and has a petrologic type of 5). The current scheme for describing petrologic types was devised by Van Schmus and Wood in 1967.
The petrologic-type scheme originated by Van Schmus and Wood is really two separate schemes, one describing aqueous alteration (types 1–2) and one describing thermal metamorphism (types 3–6). The aqueous alteration part of the system works as follows:
- Type 1 was originally used to designate chondrites that lacked chondrules and contained large amounts of water and carbon. Current usage of type 1 is simply to indicate meteorites that have experienced extensive aqueous alteration, to the point that most of their olivine and pyroxene have been altered to hydrous phases. This alteration took place at temperatures of 50 to 150 °C, so type 1 chondrites were warm, but not hot enough to experience thermal metamorphism. The members of the CI group, plus a few highly altered carbonaceous chondrites of other groups, are the only instances of type 1 chondrites.
- Type 2 chondrites are those that have experienced extensive aqueous alteration, but still contain recognizable chondrules as well as primary, unaltered olivine and/or pyroxene. The fine-grained matrix is generally fully hydrated and minerals inside chondrules may show variable degrees of hydration. This alteration probably occurred at temperatures below 20 °C, and again, these meteorites are not thermally metamorphosed. Almost all CM and CR chondrites are petrologic type 2; with the exception of some ungrouped carbonaceous chondrites, no other chondrites are type 2.
The thermal metamorphism part of the scheme describes a continuous sequence of changes to mineralogy and texture that accompany increasing metamorphic temperatures. These chondrites show little evidence of the effects of aqueous alteration:
- Type 3 chondrites show low degrees of metamorphism. They are often referred to as unequilibrated chondrites because minerals such as olivine and pyroxene show a wide range of compositions, reflecting formation under a wide variety of conditions in the solar nebula. (Type 1 and 2 chondrites are also unequilibrated.) Chondrites that remain in nearly pristine condition, with all components (chondrules, matrix, etc.) having nearly the same composition and mineralogy as when they accreted to the parent asteroid, are designated type 3.0. As petrologic type increases from type 3.1 through 3.9, profound mineralogical changes occur, starting in the dusty matrix, and then increasingly affecting the coarser-grained components like chondrules. Type 3.9 chondrites still look superficially unchanged because chondrules retain their original appearances, but all of the minerals have been affected, mostly due to diffusion of elements between grains of different composition.
- Types 4, 5, and 6 chondrites have been increasingly altered by thermal metamorphism. These are equilibrated chondrites, in which the compositions of most minerals have become quite homogeneous due to high temperatures. By type 4, the matrix has thoroughly recrystallized and coarsened in grain size. By type 5, chondrules begin to become indistinct and matrix cannot be discerned. In type 6 chondrites, chondrules begin to integrate with what was once matrix, and small chondrules may no longer be recognizable. As metamorphism proceeds, many minerals coarsen and new, metamorphic minerals such as feldspar form.
Some workers have extended the Van Schmus and Wood metamorphic scheme to include a type 7, although there is no consensus on whether this is necessary. Type 7 chondrites have experienced the highest temperatures possible short of melting. Should the onset of melting occur, the meteorite would probably be classified as a primitive achondrite instead of a chondrite.
All groups of ordinary and enstatite chondrites, as well as R and CK chondrites, show the complete metamorphic range from type 3 to 6. CO chondrites comprise only type 3 members, although these span a range of petrologic types from 3.0 to 3.8.
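The group-plus-number convention lends itself to a small parser. The sketch below splits a designation such as "LL5" into group and petrologic type and maps the type onto the aqueous/thermal regimes described above; the group list is limited to names appearing in this article.

```python
import re

# Sketch: split a chondrite designation such as "LL5" or "CO3.4" into
# its group and petrologic type, and report which regime of the Van
# Schmus and Wood scheme the type falls in. The group names are limited
# to ones mentioned in this article.

KNOWN_GROUPS = {"H", "L", "LL", "EH", "EL", "E", "CI", "CM", "CO", "CV",
                "CK", "CR", "CH", "CB", "R", "K"}

def parse_designation(designation):
    m = re.fullmatch(r"([A-Z]+)(\d(?:\.\d)?)", designation)
    if not m or m.group(1) not in KNOWN_GROUPS:
        raise ValueError(f"unrecognized designation: {designation!r}")
    group, ptype = m.group(1), float(m.group(2))
    if ptype < 3:
        regime = "aqueously altered (types 1-2)"
    elif ptype < 4:
        regime = "little altered / unequilibrated (type 3)"
    else:
        regime = "thermally metamorphosed (types 4-7)"
    return group, ptype, regime

for d in ("LL5", "CI1", "CO3.4", "EH7"):
    print(d, "->", parse_designation(d))
```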
Presence of water
These meteorites contain either a proportion of water or minerals that have been altered by water. This suggests that the asteroids from which these meteorites originate must have contained water. At the beginning of the Solar System this water would have been present as ice; a few million years after an asteroid formed, the ice melted, allowing the liquid water to react with and alter the olivines and pyroxenes. The formation of rivers and lakes on an asteroid is thought to have been unlikely if it was sufficiently porous to allow the water to percolate towards its interior, as occurs in terrestrial aquifers.
Origin of life
Carbonaceous chondrites contain more than 600 organic compounds that were synthesized in distinct places and at distinct times. These organic compounds include: hydrocarbons, carboxylic acids, alcohols, ketones, aldehydes, amines, amides, sulfonic acids, phosphonic acids, amino acids, nitrogenous bases, etc. These compounds can be divided into three main groups: a fraction that is not soluble in chloroform or methanol, chloroform soluble hydrocarbons and a fraction that is soluble in methanol (which includes the amino acids).
The first fraction appears to originate from interstellar space and the compounds belonging to the other fractions derive from a planetoid. It has been proposed that the amino acids were synthesized close to the surface of a planetoid by the radiolysis (dissociation of molecules caused by radiation) of hydrocarbons and ammonium carbonate in the presence of liquid water. In addition, the hydrocarbons could have formed deep within a planetoid by a process similar to the Fischer-Tropsch process. These conditions could be analogous to the events that caused the origin of life on Earth.
The Murchison meteorite, which fell in Australia close to the town that bears its name on 28 September 1969, has been thoroughly studied. It is a CM2 chondrite and contains common amino acids such as glycine, alanine and glutamic acid, as well as less common ones such as isovaline and pseudoleucine.
Two meteorites that were collected in Antarctica in 1992 and 1995 were found to be abundant in amino acids, which are present at concentrations of 180 and 249 ppm (carbonaceous chondrites normally contain concentrations of 15 ppm or less). This could indicate that organic material is more abundant in the Solar System than was previously believed, and it reinforces the idea that the organic compounds present in the primordial soup could have had an extraterrestrial origin.
- "2.2 La composición de la Tierra: el modelo condrítico in Planetología. Universidad Complutense de Madrid". Retrieved 19 May 2012.
- The use of the term non-metallic does not imply the total absence of metals.
- Calvin J. Hamilton (Translated from English by Antonio Bello). "Meteoroides y Meteoritos" (in Spanish). Retrieved 2009-04-18.
- Bischoff, A.; Geiger, T. (1995). "Meteorites from the Sahara: Find locations, shock classification, degree of weathering and pairing". Meteoritics. 30 (1): 113–122. Bibcode:1995Metic..30..113B. doi:10.1111/j.1945-5100.1995.tb01219.x. ISSN 0026-1114.
- Axxón. "Pistas químicas apuntan a un origen de polvo para los planetas terrestres" (in Spanish). Retrieved 11 May 2009.
- Jordi, Llorca Pique (2004). "Nuestra historia en los meteoritos". El sistema solar: Nuestro pequeño rincón en la vía láctea. Universitat Jaume I. p. 75. ISBN 848021466X.
- Amelin, Yuri; Krot, Alexander (2007). "Pb isotopic age of the Allende chondrules". Meteoritics & Planetary Science. 42 (7/8): 1043–1463. Bibcode:2007M&PS...42.1043F. doi:10.1111/j.1945-5100.2007.tb00559.x. Retrieved 2009-07-13.
- Wood, J.A. (1988). "Chondritic Meteorites and the Solar Nebula". Annual Review of Earth and Planetary Sciences. 16: 53–72. Bibcode:1988AREPS..16...53W. doi:10.1146/annurev.ea.16.050188.000413. ISSN 0084-6597.
- "Bjurböle; Meteoritical Bulletin Database. The Meteoritical Society". Retrieved 6 March 2013.
- "Grassland; Meteoritical Bulletin Database. The Meteoritical Society". Retrieved 6 March 2013.
- Múñoz-Espadas, M.J.; Martínez-Frías, J.; Lunar, R. (2003). "Mineralogía, texturas y cosmoquímica de cóndrulos RP y PO en la condrita Reliegos L5 (León, España)". Geogaceta (in Spanish). 34: 35–38. ISSN 0213-683X.
- Astrobiology Magazine. "¿Cocinó Júpiter a los meteoritos?" (in Spanish). Archived from the original on 19 April 2007. Retrieved 18 April 2009.
- Boss, A.P.; Durisen, R.H. (2005). "Chondrule-forming Shock Fronts in the Solar Nebula: A Possible Unified Scenario for Planet and Chondrite Formation". The Astrophysical Journal. 621 (2): L137–L140. arXiv:astro-ph/0501592. Bibcode:2005ApJ...621L.137B. doi:10.1086/429160.
- Van Schmus, W. R.; Wood, J. A. (1967). "A chemical-petrologic classification for the chondritic meteorites". Geochimica et Cosmochimica Acta. 31 (5): 747–765. Bibcode:1967GeCoA..31..747V. doi:10.1016/S0016-7037(67)80030-9.
- Clayton, R. N.; Mayeda, T. K. (1989), "Oxygen Isotope Classification of Carbonaceous Chondrites", Abstracts of the Lunar and Planetary Science Conference, 20: 169, Bibcode:1989LPI....20..169C
- Wlotzka, F. (Jul 1993), "A Weathering Scale for the Ordinary Chondrites", Meteoritics, 28: 460, Bibcode:1993Metic..28Q.460W
- Stöffler, Dieter; Keil, Klaus; Edward R.D, Scott (Dec 1991). "Shock metamorphism of ordinary chondrites". Geochimica et Cosmochimica Acta. 55 (12): 3845–3867. Bibcode:1991GeCoA..55.3845S. doi:10.1016/0016-7037(91)90078-J.
- The Meteorite Market. "Types of Meteorites". Retrieved 2009-04-18.
- The E stands for Enstatite, H indicates a high metallic iron content of approximately 30%, and L low. The number refers to alteration.
- Except for the High Iron, all the other carbonaceous chondrites are named after a characteristic meteorite.
- Norton, O.R. and Chitwood, L.A. Field Guide to Meteors and Meteorites, Springer-Verlag, London 2008
- New England Meteoritical Services. "Meteorlab". Retrieved 22 April 2009.
- "metal, iron, & nickel in meteorites 1". meteorites.wustl.edu.
- The Internet Encyclopedia of Science. "carbonaceous chondrite". Retrieved 26 April 2009.
- Aaron S. Burton; Jamie E. Elsila; Jason E. Hein; Daniel P. Glavin; Jason P. Dworkin (March 2013). "Extra-terrestrial amino acids identified in metal-rich CH and CB carbonaceous chondrites from Antarctica". Meteoritics & Planetary Science. 48: 390–402. Bibcode:2013M&PS...48..390B. doi:10.1111/maps.12063.
- Andrew M. Davis; Lawrence Grossman; R. Ganapathy (1977). "Yes, Kakangari is a unique chondrite". Nature. 265 (5591): 230–232. Bibcode:1977Natur.265..230D. doi:10.1038/265230a0. ISSN 0028-0836.
- Michael K. Weisberg; Martin Prinz; Robert N. Clayton; Toshiko K. Mayeda; Monica M. Grady; Ian Franchi; Colin T. Pillinger; Gregory W. Kallemeyn (1996). "The K (Kakangari) chondrite grouplet". Geochimica et Cosmochimica Acta. 60 (21): 4253–4263. Bibcode:1996GeCoA..60.4253W. doi:10.1016/S0016-7037(96)00233-5. ISSN 0016-7037.
- Meteorites.tv. Meteorites for Science, Education & Collectors. "R Group (Rumurutiites)". Archived from the original on 18 April 2013. Retrieved 28 April 2009.
- Grevesse and Sauval (2005) in Encyclopedia of Astronomy & Astrophysics, IOP Publishing, Ltd.
- Meteorite Museum. University of New Mexico. Institute of Meteoritics. "Asteroid Geology: Water". Archived from the original on 15 December 2012. Retrieved 28 April 2009.
- Drake, Michael J.; Righter, Kevin (2001). "Where did Earth's water come from?". GSA Annual Meeting. 109.
- Jörn Müller; Harald Lesch (2003). "Woher kommt das Wasser der Erde? – Urgaswolke oder Meteoriten". Chemie in unserer Zeit (in German). 37 (4): 242–246. doi:10.1002/ciuz.200300282. ISSN 0009-2851.
- Jordi Llorca i Piqué (2004). "Moléculas orgánicas en el sistema solar: ¿dónde y cómo encontrarlas?". II Curso de Ciencias Planetarias de la Universidad de Salamanca (in Spanish).
- Hyman Hartman; Michael A. Sweeney; Michael A. Kropp; John S. Lewis (1993). "Carbonaceous chondrites and the origin of life". Origins of Life and Evolution of Biospheres. 23 (4): 221–227. Bibcode:1993OLEB...23..221H. doi:10.1007/BF01581900. ISSN 0169-6149.
- Kvenvolden, Keith A.; Lawless, James; Pering, Katherine; Peterson, Etta; Flores, Jose; Ponnamperuma, Cyril; Kaplan, Isaac R.; Moore, Carleton (1970). "Evidence for extraterrestrial amino-acids and hydrocarbons in the Murchison meteorite". Nature. 228 (5275): 923–926. Bibcode:1970Natur.228..923K. doi:10.1038/228923a0. PMID 5482102.
- Carnegie Institution for Science (13 March 2008). "Meteorites a Rich Source for Primordial Soup". Retrieved 30 April 2009.
|
WORD PROBLEMS THAT LEAD TO SIMULTANEOUS EQUATIONS
HERE ARE SOME EXAMPLES of problems that lead to simultaneous equations.
Example 1. Andre has more money than Bob. If Andre gave Bob $20, they would have the same amount. While if Bob gave Andre $22, Andre would then have twice as much as Bob. How much does each one actually have?
Solution. Let x be the amount of money that Andre has. Let y be the amount that Bob has.
Always let x and y answer the question -- and be perfectly clear about what they represent!
Now there are two unknowns. Therefore there must be two equations. (In general, the number of equations must equal the number of unknowns.) How can we get two equations out of the given information? We must translate each verbal sentence into the language of algebra.
Here is the first sentence:
"If Andre gave Bob $20, they would have the same amount."
1) x − 20 = y + 20.
(Andre -- x -- has the same amount as Bob, after he gives him $20.)
Here is the second sentence:
"While if Bob gave Andre $22, Andre would then have twice as much as Bob."
2) x + 22 = 2(y − 22).
(Andre has twice as much as Bob -- after Bob gives him $22.)
To solve any system of two equations, we must reduce it to one equation in one of the unknowns. In this example, we can solve equation 1) for x --
-- and substitute it into equation 2). Equation 1) gives x = y + 40, so that
y + 40 + 22 = 2(y − 22)
y + 62 = 2y − 44
y = 106.
Bob has $106. Therefore, according to the expression for x, Andre has
106 + 40 = $146.
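For readers who would like to check this kind of solution with software, here is a minimal sketch using the sympy library; the variable names and the library choice are ours, not part of the lesson.

```python
# A quick check of Example 1 with sympy (pip install sympy).
from sympy import symbols, Eq, solve

x, y = symbols('x y')          # x = Andre's money, y = Bob's money
eq1 = Eq(x - 20, y + 20)       # "If Andre gave Bob $20, they would have the same amount."
eq2 = Eq(x + 22, 2*(y - 22))   # "If Bob gave Andre $22, Andre would have twice as much."

print(solve((eq1, eq2), (x, y)))   # {x: 146, y: 106}
```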
Example 2. 1000 tickets were sold. Adult tickets cost $8.50, children's cost $4.50, and a total of $7300 was collected. How many tickets of each kind were sold?
Solution. Let x be the number of adult tickets. Let y be the number of children's tickets.
Again, we have let x and y answer the question. And again we must get two equations out of the given information. Here they are:
1) x + y = 1000
2) 8.50x + 4.50y = 7300
In equation 2), we will make the coefficients into whole numbers by multiplying both sides of the equation by 10:
2') 85x + 45y = 73000
We call the second equation 2' ("2 prime") to show that we obtained it from equation 2).
These simultaneous equations are solved in the usual way.
The solutions are: x = 700, y = 300.
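The same elimination can also be carried out numerically. The sketch below solves a general pair of linear equations by Cramer's rule; the function name and the way the ticket equations are encoded are our own choices, not part of the original lesson.

```python
# Solve the system  a*x + b*y = e,  c*x + d*y = f  by Cramer's rule.
def solve_2x2(a, b, e, c, d, f):
    det = a*d - b*c
    if det == 0:
        raise ValueError("The system has no unique solution.")
    return (e*d - b*f) / det, (a*f - e*c) / det

# Example 2:  x + y = 1000  and  8.50x + 4.50y = 7300
adult, child = solve_2x2(1, 1, 1000, 8.50, 4.50, 7300)
print(adult, child)   # 700.0 300.0
```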
Example 3. Mrs. B. invested $30,000; part at 5%, and part at 8%. The total interest on the investment was $2,100. How much did she invest at each rate?
(To change a percent to a decimal, see Skill in Arithmetic, Lesson 4.)
Solution. Let x be the amount she invested at 5%, and let y be the amount invested at 8%. Here are the equations:
1) x + y = 30,000
2) .05x + .08y = 2,100
Again, in equation 2) let us make the coefficients whole numbers by multiplying both sides of the equation by 100:
2') 5x + 8y = 210,000
These are the simultaneous equations to solve.
The solutions are: x = $10,000, y = $20,000.
Problem. Samantha has 30 coins, consisting of quarters and dimes, which total $5.70. How many of each does she have?
Let x be the number of quarters. Let y be the number of dimes.
The equations are:
1) x + y = 30
2) .25x + .10y = 5.70
To eliminate y:
Multiply equation 1) by −10 and equation 2) by 100:
1') −10x − 10y = −300
2') 25x + 10y = 570
Adding the two equations gives 15x = 270, so that x = 18.
Therefore, y = 30 − 18 = 12.
Example 4. Mixture problem 1. First:
"36 gallons of a 25% alcohol solution"
means: 25%, or one quarter, of the solution is pure alcohol.
One quarter of 36 is 9. That solution contains 9 gallons of pure alcohol.
Here is the problem:
How many gallons of 30% alcohol solution and how many of 60% alcohol solution must be mixed to produce 18 gallons of 50% solution?
"18 gallons of 50% solution" means: 50%, or half, is pure alcohol. The final solution, then, will have 9 gallons of pure alcohol.
Let x be the number of gallons of 30% solution. Let y be the number of gallons of 60% solution.
The equations are:
1) x + y = 18
2) .30x + .60y = 9
Multiplying both sides of equation 2) by 10:
2') 3x + 6y = 90
Equations 1) and 2') are the two equations in the two unknowns.
The solutions are: x = 6 gallons, y = 12 gallons.
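As a quick numerical check of a mixture answer, it is enough to verify that the gallons and the pure alcohol both balance. A short sketch, using the numbers from this example (the variable names are ours):

```python
# Verify Example 4: 6 gal of 30% plus 12 gal of 60% should give 18 gal of 50%.
x, y = 6, 12
total = x + y                      # 18 gallons in all
alcohol = 0.30*x + 0.60*y          # 1.8 + 7.2 = 9 gallons of pure alcohol
print(total, alcohol, alcohol / total)   # 18 9.0 0.5  -> a 50% solution
```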
Example 5. Mixture problem 2. A saline solution is 20% salt. How much water must you add to how much saline solution, in order to dilute it to 8 gallons of 15% solution?
(This is more an arithmetic problem than an algebra problem.)
Solution. Let s be the number of gallons of saline solution. Now all the salt will come from those s gallons. So the question is, What is s so that 20% of s -- the salt -- will be 15% of 8 gallons?
.2s = .15 × 8 = 1.2
2s = 12.
s = 6.
Therefore, to 6 gallons of saline solution you must add 2 gallons of water.
Example 6. Upstream/Downstream problem. It takes 3 hours for a boat to travel 27 miles upstream. The same boat can travel 30 miles downstream in 2 hours. Find the speeds of the boat and the current.
Solution. Let x be the speed of the boat (without a current). Let y be the speed of the current.
The student might review the meanings of "upstream" and "downstream," Lesson 25. We saw there that speed, or velocity, is distance divided by time:
v = d/t.
Therefore, according to the problem:
Speed upstream = 27/3 = 9 mph = x − y.
Speed downstream = 30/2 = 15 mph = x + y.
Here are the equations:
1) x − y = 9
2) x + y = 15
(The solutions are: x = 12 mph, y = 3 mph.)
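A brief sketch of the same upstream/downstream arithmetic (the variable names are ours, not part of the lesson):

```python
# Upstream: 27 miles in 3 hours; downstream: 30 miles in 2 hours.
upstream = 27 / 3       # 9 mph  = boat speed - current
downstream = 30 / 2     # 15 mph = boat speed + current
boat = (upstream + downstream) / 2      # 12 mph
current = (downstream - upstream) / 2   # 3 mph
print(boat, current)
```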
|
In physiology, an action potential is a short-lasting event in which the electrical membrane potential of a cell rapidly rises and falls, following a consistent trajectory. Action potentials occur in several types of animal cells, called excitable cells, which include neurons, muscle cells, and endocrine cells, as well as in some plant cells. In neurons, action potentials play a central role in cell-to-cell communication by providing for (or assisting in, with regard to saltatory conduction) the propagation of signals along the neuron's axon towards boutons at the axon ends which can then connect with other neurons at synapses, or to motor cells or glands. In other types of cells, their main function is to activate intracellular processes. In muscle cells, for example, an action potential is the first step in the chain of events leading to contraction. In beta cells of the pancreas, they provoke release of insulin.[a] Action potentials in neurons are also known as "nerve impulses" or "spikes", and the temporal sequence of action potentials generated by a neuron is called its "spike train". A neuron that emits an action potential is often said to "fire".
Action potentials are generated by special types of voltage-gated ion channels embedded in a cell's plasma membrane.[b] These channels are shut when the membrane potential is near the resting potential of the cell, but they rapidly begin to open if the membrane potential increases to a precisely defined threshold value. When the channels open (in response to depolarization in transmembrane voltage[b]), they allow an inward flow of sodium ions, which changes the electrochemical gradient, which in turn produces a further rise in the membrane potential. This then causes more channels to open, producing a greater electric current across the cell membrane, and so on. The process proceeds explosively until all of the available ion channels are open, resulting in a large upswing in the membrane potential. The rapid influx of sodium ions causes the polarity of the plasma membrane to reverse, and the ion channels then rapidly inactivate. As the sodium channels close, sodium ions can no longer enter the neuron, and then they are actively transported back out of the plasma membrane. Potassium channels are then activated, and there is an outward current of potassium ions, returning the electrochemical gradient to the resting state. After an action potential has occurred, there is a transient negative shift, called the afterhyperpolarization or refractory period, due to additional potassium currents. This mechanism prevents an action potential from traveling back the way it just came.
In animal cells, there are two primary types of action potentials. One type is generated by voltage-gated sodium channels, the other by voltage-gated calcium channels. Sodium-based action potentials usually last for under one millisecond, whereas calcium-based action potentials may last for 100 milliseconds or longer. In some types of neurons, slow calcium spikes provide the driving force for a long burst of rapidly emitted sodium spikes. In cardiac muscle cells, on the other hand, an initial fast sodium spike provides a "primer" to provoke the rapid onset of a calcium spike, which then produces muscle contraction.
- 1 Overview
- 2 Biophysical basis
- 3 Neurotransmission
- 4 Phases
- 5 Propagation
- 6 Termination
- 7 Other cell types
- 8 Taxonomic distribution and evolutionary advantages
- 9 Experimental methods
- 10 Neurotoxins
- 11 History
- 12 Quantitative models
- 13 See also
- 14 Notes
- 15 Footnotes
- 16 References
- 17 Further reading
- 18 External links
Nearly all cell membranes in animals, plants and fungi maintain a voltage difference between the exterior and interior of the cell, called the membrane potential. A typical voltage across an animal cell membrane is –70 mV; this means that the interior of the cell has a negative voltage of approximately one-fifteenth of a volt relative to the exterior. In most types of cells the membrane potential usually stays fairly constant. Some types of cells, however, are electrically active in the sense that their voltages fluctuate over time. In some types of electrically active cells, including neurons and muscle cells, the voltage fluctuations frequently take the form of a rapid upward spike followed by a rapid fall. These up-and-down cycles are known as action potentials. In some types of neurons the entire up-and-down cycle takes place in a few thousandths of a second. In muscle cells, a typical action potential lasts about a fifth of a second. In some other types of cells, and in plants, an action potential may last three seconds or more.
The electrical properties of a cell are determined by the structure of the membrane that surrounds it. A cell membrane consists of a lipid bilayer of molecules in which larger protein molecules are embedded. The lipid bilayer is highly resistant to movement of electrically charged ions, so it functions as an insulator. The large membrane-embedded proteins, in contrast, provide channels through which ions can pass across the membrane. Action potentials are driven by channel proteins whose configuration switches between closed and open states as a function of the voltage difference between the interior and exterior of the cell. These voltage-sensitive proteins are known as voltage-gated ion channels.
Process in a typical neuron
All cells in animal body tissues are electrically polarized – in other words, they maintain a voltage difference across the cell's plasma membrane, known as the membrane potential. This electrical polarization results from a complex interplay between protein structures embedded in the membrane called ion pumps and ion channels. In neurons, the types of ion channels in the membrane usually vary across different parts of the cell, giving the dendrites, axon, and cell body different electrical properties. As a result, some parts of the membrane of a neuron may be excitable (capable of generating action potentials), whereas others are not. Recent studies have shown that the most excitable part of a neuron is the part after the axon hillock (the point where the axon leaves the cell body), which is called the initial segment, but the axon and cell body are also excitable in most cases.
Each excitable patch of membrane has two important levels of membrane potential: the resting potential, which is the value the membrane potential maintains as long as nothing perturbs the cell, and a higher value called the threshold potential. At the axon hillock of a typical neuron, the resting potential is around –70 millivolts (mV) and the threshold potential is around –55 mV. Synaptic inputs to a neuron cause the membrane to depolarize or hyperpolarize; that is, they cause the membrane potential to rise or fall. Action potentials are triggered when enough depolarization accumulates to bring the membrane potential up to threshold. When an action potential is triggered, the membrane potential abruptly shoots upward and then equally abruptly shoots back downward, often ending below the resting level, where it remains for some period of time. The shape of the action potential is stereotyped; that is, the rise and fall usually have approximately the same amplitude and time course for all action potentials in a given cell. (Exceptions are discussed later in the article.) In most neurons, the entire process takes place in about a thousandth of a second. Many types of neurons emit action potentials constantly at rates of up to 10–100 per second; some types, however, are much quieter, and may go for minutes or longer without emitting any action potentials.
Action potentials result from the presence in a cell's membrane of special types of voltage-gated ion channels. A voltage-gated ion channel is a cluster of proteins embedded in the membrane that has three key properties:
- It is capable of assuming more than one conformation.
- At least one of the conformations creates a channel through the membrane that is permeable to specific types of ions.
- The transition between conformations is influenced by the membrane potential.
Thus, a voltage-gated ion channel tends to be open for some values of the membrane potential, and closed for others. In most cases, however, the relationship between membrane potential and channel state is probabilistic and involves a time delay. Ion channels switch between conformations at unpredictable times: The membrane potential determines the rate of transitions and the probability per unit time of each type of transition.
Voltage-gated ion channels are capable of producing action potentials because they can give rise to positive feedback loops: The membrane potential controls the state of the ion channels, but the state of the ion channels controls the membrane potential. Thus, in some situations, a rise in the membrane potential can cause ion channels to open, thereby causing a further rise in the membrane potential. An action potential occurs when this positive feedback cycle proceeds explosively. The time and amplitude trajectory of the action potential are determined by the biophysical properties of the voltage-gated ion channels that produce it. Several types of channels that are capable of producing the positive feedback necessary to generate an action potential exist. Voltage-gated sodium channels are responsible for the fast action potentials involved in nerve conduction. Slower action potentials in muscle cells and some types of neurons are generated by voltage-gated calcium channels. Each of these types comes in multiple variants, with different voltage sensitivity and different temporal dynamics.
The most intensively studied type of voltage-dependent ion channels comprises the sodium channels involved in fast nerve conduction. These are sometimes known as Hodgkin-Huxley sodium channels because they were first characterized by Alan Hodgkin and Andrew Huxley in their Nobel Prize-winning studies of the biophysics of the action potential, but can more conveniently be referred to as NaV channels. (The "V" stands for "voltage".) An NaV channel has three possible states, known as deactivated, activated, and inactivated. The channel is permeable only to sodium ions when it is in the activated state. When the membrane potential is low, the channel spends most of its time in the deactivated (closed) state. If the membrane potential is raised above a certain level, the channel shows increased probability of transitioning to the activated (open) state. The higher the membrane potential the greater the probability of activation. Once a channel has activated, it will eventually transition to the inactivated (closed) state. It tends then to stay inactivated for some time, but, if the membrane potential becomes low again, the channel will eventually transition back to the deactivated state. During an action potential, most channels of this type go through a cycle deactivated→activated→inactivated→deactivated. This is only the population average behavior, however — an individual channel can in principle make any transition at any time. However, the likelihood of a channel's transitioning from the inactivated state directly to the activated state is very low: A channel in the inactivated state is refractory until it has transitioned back to the deactivated state.
The outcome of all this is that the kinetics of the NaV channels are governed by a transition matrix whose rates are voltage-dependent in a complicated way. Since these channels themselves play a major role in determining the voltage, the global dynamics of the system can be quite difficult to work out. Hodgkin and Huxley approached the problem by developing a set of differential equations for the parameters that govern the ion channel states, known as the Hodgkin-Huxley equations. These equations have been extensively modified by later research, but form the starting point for most theoretical studies of action potential biophysics.
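The equations themselves are not reproduced here, but as a rough illustration of what such a model looks like numerically, the following is a minimal single-compartment sketch in the spirit of the Hodgkin-Huxley formulation. The rate functions, parameter values, injected current and forward-Euler integration scheme are commonly quoted textbook conventions assumed purely for illustration, not values taken from this article.

```python
import math

# Classic Hodgkin-Huxley-style parameters (textbook convention, assumed here)
C_m = 1.0                            # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3    # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4  # reversal potentials, mV

def alpha_m(V): return 0.1*(V + 40.0) / (1.0 - math.exp(-(V + 40.0)/10.0))
def beta_m(V):  return 4.0*math.exp(-(V + 65.0)/18.0)
def alpha_h(V): return 0.07*math.exp(-(V + 65.0)/20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0)/10.0))
def alpha_n(V): return 0.01*(V + 55.0) / (1.0 - math.exp(-(V + 55.0)/10.0))
def beta_n(V):  return 0.125*math.exp(-(V + 65.0)/80.0)

V, m, h, n = -65.0, 0.05, 0.6, 0.32  # approximate resting state
dt, I_ext = 0.01, 10.0               # time step (ms) and injected current (uA/cm^2)

for step in range(int(50.0 / dt)):   # simulate 50 ms of membrane voltage
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K  = g_K  * n**4     * (V - E_K)
    I_L  = g_L             * (V - E_L)
    dV = (I_ext - I_Na - I_K - I_L) / C_m
    # update the gating variables with their voltage-dependent rates
    m += dt * (alpha_m(V)*(1 - m) - beta_m(V)*m)
    h += dt * (alpha_h(V)*(1 - h) - beta_h(V)*h)
    n += dt * (alpha_n(V)*(1 - n) - beta_n(V)*n)
    V += dt * dV
    if step % 500 == 0:
        print(f"t = {step*dt:5.1f} ms, V = {V:7.2f} mV")
```

With a sustained injected current of this size, the trace shows repetitive spiking, which is the "spike train" behavior described above.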
As the membrane potential is increased, sodium ion channels open, allowing the entry of sodium ions into the cell. This is followed by the opening of potassium ion channels that permit the exit of potassium ions from the cell. The inward flow of sodium ions increases the concentration of positively charged cations in the cell and causes depolarization, where the potential of the cell is higher than the cell's resting potential. The sodium channels close at the peak of the action potential, while potassium continues to leave the cell. The efflux of potassium ions decreases the membrane potential or hyperpolarizes the cell. For small voltage increases from rest, the potassium current exceeds the sodium current and the voltage returns to its normal resting value, typically −70 mV. However, if the voltage increases past a critical threshold, typically 15 mV higher than the resting value, the sodium current dominates. This results in a runaway condition whereby the positive feedback from the sodium current activates even more sodium channels. Thus, the cell fires, producing an action potential.[note 1] The frequency at which cellular action potentials are produced is known as its firing rate.
Currents produced by the opening of voltage-gated channels in the course of an action potential are typically significantly larger than the initial stimulating current. Thus, the amplitude, duration, and shape of the action potential are determined largely by the properties of the excitable membrane and not the amplitude or duration of the stimulus. This all-or-nothing property of the action potential sets it apart from graded potentials such as receptor potentials, electrotonic potentials, and synaptic potentials, which scale with the magnitude of the stimulus. A variety of action potential types exist in many cell types and cell compartments as determined by the types of voltage-gated channels, leak channels, channel distributions, ionic concentrations, membrane capacitance, temperature, and other factors.
The principal ions involved in an action potential are sodium and potassium cations; sodium ions enter the cell, and potassium ions leave, restoring equilibrium. Relatively few ions need to cross the membrane for the membrane voltage to change drastically. The ions exchanged during an action potential, therefore, make a negligible change in the interior and exterior ionic concentrations. The few ions that do cross are pumped out again by the continuous action of the sodium–potassium pump, which, with other ion transporters, maintains the normal ratio of ion concentrations across the membrane. Calcium cations and chloride anions are involved in a few types of action potentials, such as the cardiac action potential and the action potential in the single-cell alga Acetabularia, respectively.
Although action potentials are generated locally on patches of excitable membrane, the resulting currents can trigger action potentials on neighboring stretches of membrane, precipitating a domino-like propagation. In contrast to passive spread of electric potentials (electrotonic potential), action potentials are generated anew along excitable stretches of membrane and propagate without decay. Myelinated sections of axons are not excitable and do not produce action potentials and the signal is propagated passively as electrotonic potential. Regularly spaced unmyelinated patches, called the nodes of Ranvier, generate action potentials to boost the signal. Known as saltatory conduction, this type of signal propagation provides a favorable tradeoff of signal velocity and axon diameter. Depolarization of axon terminals, in general, triggers the release of neurotransmitter into the synaptic cleft. In addition, backpropagating action potentials have been recorded in the dendrites of pyramidal neurons, which are ubiquitous in the neocortex.[c] These are thought to have a role in spike-timing-dependent plasticity.
Anatomy of a neuron
Several types of cells support an action potential, such as plant cells, muscle cells, and the specialized cells of the heart (in which occurs the cardiac action potential). However, the main excitable cell is the neuron, which also has the simplest mechanism for the action potential.
Neurons are electrically excitable cells composed, in general, of one or more dendrites, a single soma, a single axon and one or more axon terminals. Dendrites are cellular projections whose primary function is to receive synaptic signals. Their protrusions, or spines, are designed to capture the neurotransmitters released by the presynaptic neuron. They have a high concentration of ligand-gated ion channels. These spines have a thin neck connecting a bulbous protrusion to the dendrite. This ensures that changes occurring inside the spine are less likely to affect the neighboring spines. The dendritic spine can, with rare exception (see LTP), act as an independent unit. The dendrites extend from the soma, which houses the nucleus, and many of the "normal" eukaryotic organelles. Unlike the spines, the surface of the soma is populated by voltage activated ion channels. These channels help transmit the signals generated by the dendrites. Emerging out from the soma is the axon hillock. This region is characterized by having a very high concentration of voltage-activated sodium channels. In general, it is considered to be the spike initiation zone for action potentials. Multiple signals generated at the spines, and transmitted by the soma all converge here. Immediately after the axon hillock is the axon. This is a thin tubular protrusion traveling away from the soma. The axon is insulated by a myelin sheath. Myelin is composed of either Schwann cells (in the peripheral nervous system) or oligodendrocytes (in the central nervous system), both of which are types of glial cells. Although glial cells are not involved with the transmission of electrical signals, they communicate and provide important biochemical support to neurons. To be specific, myelin wraps multiple times around the axonal segment, forming a thick fatty layer that prevents ions from entering or escaping the axon. This insulation prevents significant signal decay as well as ensuring faster signal speed. This insulation, however, has the restriction that no channels can be present on the surface of the axon. There are, therefore, regularly spaced patches of membrane, which have no insulation. These nodes of Ranvier can be considered to be "mini axon hillocks", as their purpose is to boost the signal in order to prevent significant signal decay. At the furthest end, the axon loses its insulation and begins to branch into several axon terminals. These presynaptic terminals, or synaptic boutons, are a specialized area within the axon of the presynaptic cell that contains neurotransmitters enclosed in small membrane-bound spheres called synaptic vesicles.
Before considering the propagation of action potentials along axons and their termination at the synaptic knobs, it is helpful to consider the methods by which action potentials can be initiated at the axon hillock. The basic requirement is that the membrane voltage at the hillock be raised above the threshold for firing. There are several ways in which this depolarization can occur.
Action potentials are most commonly initiated by excitatory postsynaptic potentials from a presynaptic neuron. Typically, neurotransmitter molecules are released by the presynaptic neuron. These neurotransmitters then bind to receptors on the postsynaptic cell. This binding opens various types of ion channels. This opening has the further effect of changing the local permeability of the cell membrane and, thus, the membrane potential. If the binding increases the voltage (depolarizes the membrane), the synapse is excitatory. If, however, the binding decreases the voltage (hyperpolarizes the membrane), it is inhibitory. Whether the voltage is increased or decreased, the change propagates passively to nearby regions of the membrane (as described by the cable equation and its refinements). Typically, the voltage stimulus decays exponentially with the distance from the synapse and with time from the binding of the neurotransmitter. Some fraction of an excitatory voltage may reach the axon hillock and may (in rare cases) depolarize the membrane enough to provoke a new action potential. More typically, the excitatory potentials from several synapses must work together at nearly the same time to provoke a new action potential. Their joint efforts can be thwarted, however, by the counteracting inhibitory postsynaptic potentials.
Neurotransmission can also occur through electrical synapses. Due to the direct connection between excitable cells in the form of gap junctions, an action potential can be transmitted directly from one cell to the next in either direction. The free flow of ions between cells enables rapid non-chemical-mediated transmission. Rectifying channels ensure that action potentials move only in one direction through an electrical synapse. Electrical synapses are found in all nervous systems, including the human brain, although they are a distinct minority.
The amplitude of an action potential is independent of the amount of current that produced it. In other words, larger currents do not create larger action potentials. Therefore, action potentials are said to be all-or-none signals, since either they occur fully or they do not occur at all.[d][e][f] This is in contrast to receptor potentials, whose amplitudes are dependent on the intensity of a stimulus. In both cases, the frequency of action potentials is correlated with the intensity of a stimulus.
In sensory neurons, an external signal such as pressure, temperature, light, or sound is coupled with the opening and closing of ion channels, which in turn alter the ionic permeabilities of the membrane and its voltage. These voltage changes can again be excitatory (depolarizing) or inhibitory (hyperpolarizing) and, in some sensory neurons, their combined effects can depolarize the axon hillock enough to provoke action potentials. Examples in humans include the olfactory receptor neuron and Meissner's corpuscle, which are critical for the sense of smell and touch, respectively. However, not all sensory neurons convert their external signals into action potentials; some do not even have an axon! Instead, they may convert the signal into the release of a neurotransmitter, or into continuous graded potentials, either of which may stimulate subsequent neuron(s) into firing an action potential. For illustration, in the human ear, hair cells convert the incoming sound into the opening and closing of mechanically gated ion channels, which may cause neurotransmitter molecules to be released. In similar manner, in the human retina, the initial photoreceptor cells and the next layer of cells (comprising bipolar cells and horizontal cells) do not produce action potentials; only some amacrine cells and the third layer, the ganglion cells, produce action potentials, which then travel up the optic nerve.
In sensory neurons, action potentials result from an external stimulus. However, some excitable cells require no such stimulus to fire: They spontaneously depolarize their axon hillock and fire action potentials at a regular rate, like an internal clock. The voltage traces of such cells are known as pacemaker potentials. The cardiac pacemaker cells of the sinoatrial node in the heart provide a good example.[g] Although such pacemaker potentials have a natural rhythm, it can be adjusted by external stimuli; for instance, heart rate can be altered by pharmaceuticals as well as signals from the sympathetic and parasympathetic nerves. The external stimuli do not cause the cell's repetitive firing, but merely alter its timing. In some cases, the regulation of frequency can be more complex, leading to patterns of action potentials, such as bursting.
The course of the action potential can be divided into five parts: the rising phase, the peak phase, the falling phase, the undershoot phase, and the refractory period. During the rising phase the membrane potential depolarizes (becomes more positive). The point at which depolarization stops is called the peak phase. At this stage, the membrane potential reaches a maximum. Subsequent to this, there is a falling phase. During this stage the membrane potential becomes more negative, returning towards resting potential. The undershoot, or afterhyperpolarization, phase is the period during which the membrane potential temporarily becomes more negatively charged than when at rest (hyperpolarized). Finally, the time during which a subsequent action potential is impossible or difficult to fire is called the refractory period, which may overlap with the other phases.
The course of the action potential is determined by two coupled effects. First, voltage-sensitive ion channels open and close in response to changes in the membrane voltage Vm. This changes the membrane's permeability to those ions. Second, according to the Goldman equation, this change in permeability changes the equilibrium potential Em, and, thus, the membrane voltage Vm.[h] Thus, the membrane potential affects the permeability, which then further affects the membrane potential. This sets up the possibility for positive feedback, which is a key part of the rising phase of the action potential. A complicating factor is that a single ion channel may have multiple internal "gates" that respond to changes in Vm in opposite ways, or at different rates.[i] For example, although raising Vm opens most gates in the voltage-sensitive sodium channel, it also closes the channel's "inactivation gate", albeit more slowly. Hence, when Vm is raised suddenly, the sodium channels open initially, but then close due to the slower inactivation.
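To make the role of the Goldman equation concrete, the sketch below computes a membrane potential from relative permeabilities and ion concentrations. The permeability ratios and concentrations are typical textbook-style numbers for a resting mammalian neuron, assumed here purely for illustration.

```python
import math

R, T, F = 8.314, 310.0, 96485.0   # gas constant (J/mol/K), body temperature (K), Faraday (C/mol)

def goldman(P_K, P_Na, P_Cl, K_o, K_i, Na_o, Na_i, Cl_o, Cl_i):
    """Goldman-Hodgkin-Katz voltage equation; returns the membrane potential in volts."""
    num = P_K*K_o + P_Na*Na_o + P_Cl*Cl_i    # chloride terms are swapped because Cl- is an anion
    den = P_K*K_i + P_Na*Na_i + P_Cl*Cl_o
    return (R*T/F) * math.log(num/den)

# Assumed resting values: P_K : P_Na : P_Cl = 1 : 0.05 : 0.45,
# concentrations in mM (outside/inside): K 5/140, Na 145/12, Cl 110/10.
V_rest = goldman(1.0, 0.05, 0.45, 5, 140, 145, 12, 110, 10)
print(f"Resting potential ~ {1000*V_rest:.0f} mV")   # roughly -65 mV with these assumed numbers
```

Raising the sodium permeability term in this calculation pulls the result toward ENa, which is the quantitative content of the positive-feedback argument above.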
The voltages and currents of the action potential in all of its phases were modeled accurately by Alan Lloyd Hodgkin and Andrew Huxley in 1952,[i] for which they were awarded the Nobel Prize in Physiology or Medicine in 1963.[β] However, their model considers only two types of voltage-sensitive ion channels, and makes several assumptions about them, e.g., that their internal gates open and close independently of one another. In reality, there are many types of ion channels, and they do not always open and close independently.[j]
Stimulation and rising phase
A typical action potential begins at the axon hillock with a sufficiently strong depolarization, e.g., a stimulus that increases Vm. This depolarization is often caused by the injection of extra sodium cations into the cell; these cations can come from a wide variety of sources, such as chemical synapses, sensory neurons or pacemaker potentials.
For a neuron at rest, there is a high concentration of sodium and chloride ions in the extracellular fluid compared to the intracellular fluid, while there is a high concentration of potassium ions in the intracellular fluid compared to the extracellular fluid. This concentration gradient, along with potassium leak channels present in the membrane of the neuron, causes an efflux of potassium ions, making the resting potential close to EK ≈ −75 mV. The depolarization opens both the sodium and potassium channels in the membrane, allowing the ions to flow into and out of the axon, respectively. If the depolarization is small (say, increasing Vm from −70 mV to −60 mV), the outward potassium current overwhelms the inward sodium current and the membrane repolarizes back to its normal resting potential around −70 mV. However, if the depolarization is large enough, the inward sodium current increases more than the outward potassium current and a runaway condition (positive feedback) results: the more inward current there is, the more Vm increases, which in turn further increases the inward current. A sufficiently strong depolarization (increase in Vm) causes the voltage-sensitive sodium channels to open; the increasing permeability to sodium drives Vm closer to the sodium equilibrium voltage ENa ≈ +55 mV. The increasing voltage in turn causes even more sodium channels to open, which pushes Vm still further towards ENa. This positive feedback continues until the sodium channels are fully open and Vm is close to ENa. The sharp rise in Vm and sodium permeability correspond to the rising phase of the action potential.
The critical threshold voltage for this runaway condition is usually around −45 mV, but it depends on the recent activity of the axon. A membrane that has just fired an action potential cannot fire another one immediately, since the ion channels have not returned to the deactivated state. The period during which no new action potential can be fired is called the absolute refractory period. At longer times, after some but not all of the ion channels have recovered, the axon can be stimulated to produce another action potential, but with a higher threshold, requiring a much stronger depolarization, e.g., to −30 mV. The period during which action potentials are unusually difficult to evoke is called the relative refractory period.
Peak and falling phase
The positive feedback of the rising phase slows and comes to a halt as the sodium ion channels become maximally open. At the peak of the action potential, the sodium permeability is maximized and the membrane voltage Vm is nearly equal to the sodium equilibrium voltage ENa. However, the same raised voltage that opened the sodium channels initially also slowly shuts them off, by closing their pores; the sodium channels become inactivated. This lowers the membrane's permeability to sodium relative to potassium, driving the membrane voltage back towards the resting value. At the same time, the raised voltage opens voltage-sensitive potassium channels; the increase in the membrane's potassium permeability drives Vm towards EK. Combined, these changes in sodium and potassium permeability cause Vm to drop quickly, repolarizing the membrane and producing the "falling phase" of the action potential.
The raised voltage opened many more potassium channels than usual, and some of these do not close right away when the membrane returns to its normal resting voltage. In addition, further potassium channels open in response to the influx of calcium ions during the action potential. The potassium permeability of the membrane is transiently unusually high, driving the membrane voltage Vm even closer to the potassium equilibrium voltage EK. Hence, there is an undershoot or hyperpolarization, termed an afterhyperpolarization in technical language, that persists until the membrane potassium permeability returns to its usual value.
Each action potential is followed by a refractory period, which can be divided into an absolute refractory period, during which it is impossible to evoke another action potential, and then a relative refractory period, during which a stronger-than-usual stimulus is required. These two refractory periods are caused by changes in the state of sodium and potassium channel molecules. When closing after an action potential, sodium channels enter an "inactivated" state, in which they cannot be made to open regardless of the membrane potential—this gives rise to the absolute refractory period. Even after a sufficient number of sodium channels have transitioned back to their resting state, it frequently happens that a fraction of potassium channels remains open, making it difficult for the membrane potential to depolarize, and thereby giving rise to the relative refractory period. Because the density and subtypes of potassium channels may differ greatly between different types of neurons, the duration of the relative refractory period is highly variable.
The absolute refractory period is largely responsible for the unidirectional propagation of action potentials along axons. At any given moment, the patch of axon behind the actively spiking part is refractory, but the patch in front, not having been activated recently, is capable of being stimulated by the depolarization from the action potential.
The action potential generated at the axon hillock propagates as a wave along the axon. The currents flowing inwards at a point on the axon during an action potential spread out along the axon, and depolarize the adjacent sections of its membrane. If sufficiently strong, this depolarization provokes a similar action potential at the neighboring membrane patches. This basic mechanism was demonstrated by Alan Lloyd Hodgkin in 1937. After crushing or cooling nerve segments and thus blocking the action potentials, he showed that an action potential arriving on one side of the block could provoke another action potential on the other, provided that the blocked segment was sufficiently short.[k]
Once an action potential has occurred at a patch of membrane, the membrane patch needs time to recover before it can fire again. At the molecular level, this absolute refractory period corresponds to the time required for the voltage-activated sodium channels to recover from inactivation, i.e., to return to their closed state. There are many types of voltage-activated potassium channels in neurons: some inactivate quickly (A-type currents) and some inactivate slowly or not at all. This variability guarantees that there will always be an available source of current for repolarization, even if some of the potassium channels are inactivated because of preceding depolarization. On the other hand, all neuronal voltage-activated sodium channels inactivate within several milliseconds during strong depolarization, making further depolarization impossible until a substantial fraction of sodium channels have returned to their closed state. Although it limits the frequency of firing, the absolute refractory period ensures that the action potential moves in only one direction along an axon. The currents flowing in due to an action potential spread out in both directions along the axon. However, only the unfired part of the axon can respond with an action potential; the part that has just fired is unresponsive until the action potential is safely out of range and cannot restimulate that part. In the usual orthodromic conduction, the action potential propagates from the axon hillock towards the synaptic knobs (the axonal termini); propagation in the opposite direction, known as antidromic conduction, is very rare. However, if a laboratory axon is stimulated in its middle, both halves of the axon are "fresh", i.e., unfired; then two action potentials will be generated, one traveling towards the axon hillock and the other traveling towards the synaptic knobs.
Myelin and saltatory conduction
In order to enable fast and efficient transduction of electrical signals in the nervous system, certain neuronal axons are covered with myelin sheaths. Myelin is a multilamellar membrane that enwraps the axon in segments separated by intervals known as nodes of Ranvier. It is produced by specialized cells: Schwann cells exclusively in the peripheral nervous system, and oligodendrocytes exclusively in the central nervous system. The myelin sheath reduces membrane capacitance and increases membrane resistance in the inter-node intervals, thus allowing a fast, saltatory movement of action potentials from node to node.[l][m][n] Myelination is found mainly in vertebrates, but an analogous system has been discovered in a few invertebrates, such as some species of shrimp.[o] Not all neurons in vertebrates are myelinated; for example, axons of the neurons comprising the autonomic nervous system are not, in general, myelinated.
Myelin prevents ions from entering or leaving the axon along myelinated segments. As a general rule, myelination increases the conduction velocity of action potentials and makes them more energy-efficient. Whether saltatory or not, the mean conduction velocity of an action potential ranges from 1 meter per second (m/s) to over 100 m/s, and, in general, increases with axonal diameter.[p]
Action potentials cannot propagate through the membrane in myelinated segments of the axon. Instead, the current is carried passively by the cytoplasm, which is sufficient to depolarize the first or second subsequent node of Ranvier. Thus, the ionic current from an action potential at one node of Ranvier provokes another action potential at the next node; this apparent "hopping" of the action potential from node to node is known as saltatory conduction. Although the mechanism of saltatory conduction was suggested in 1925 by Ralph Lillie,[q] the first experimental evidence for saltatory conduction came from Ichiji Tasaki[r] and Taiji Takeuchi[s] and from Andrew Huxley and Robert Stämpfli.[t] By contrast, in unmyelinated axons, the action potential provokes another in the membrane immediately adjacent, and moves continuously down the axon like a wave.
Myelin has two important advantages: fast conduction speed and energy efficiency. For axons larger than a minimum diameter (roughly 1 micrometre), myelination increases the conduction velocity of an action potential, typically tenfold.[v] Conversely, for a given conduction velocity, myelinated fibers are smaller than their unmyelinated counterparts. For example, action potentials move at roughly the same speed (25 m/s) in a myelinated frog axon and an unmyelinated squid giant axon, but the frog axon has a roughly 30-fold smaller diameter and 1000-fold smaller cross-sectional area. Also, since the ionic currents are confined to the nodes of Ranvier, far fewer ions "leak" across the membrane, saving metabolic energy. This saving is a significant selective advantage, since the human nervous system uses approximately 20% of the body's metabolic energy.[v]
The length of axons' myelinated segments is important to the success of saltatory conduction. They should be as long as possible to maximize the speed of conduction, but not so long that the arriving signal is too weak to provoke an action potential at the next node of Ranvier. In nature, myelinated segments are generally long enough for the passively propagated signal to travel for at least two nodes while retaining enough amplitude to fire an action potential at the second or third node. Thus, the safety factor of saltatory conduction is high, allowing transmission to bypass nodes in case of injury. However, action potentials may end prematurely in certain places where the safety factor is low, even in unmyelinated neurons; a common example is the branch point of an axon, where it divides into two axons.
Some diseases degrade myelin and impair saltatory conduction, reducing the conduction velocity of action potentials.[w] The most well-known of these is multiple sclerosis, in which the breakdown of myelin impairs coordinated movement.
The flow of currents within an axon can be described quantitatively by cable theory and its elaborations, such as the compartmental model. Cable theory was developed in 1855 by Lord Kelvin to model the transatlantic telegraph cable[x] and was shown to be relevant to neurons by Hodgkin and Rushton in 1946.[y] In simple cable theory, the neuron is treated as an electrically passive, perfectly cylindrical transmission cable, which can be described by the partial differential equation
τ ∂V/∂t = λ² ∂²V/∂x² − V,
where V(x, t) is the voltage across the membrane at a time t and a position x along the length of the neuron, and where λ and τ are the characteristic length and time scales on which those voltages decay in response to a stimulus. These scales can be determined from the membrane and axial resistances and the capacitance per unit length.
These time and length-scales can be used to understand the dependence of the conduction velocity on the diameter of the neuron in unmyelinated fibers. For example, the time-scale τ increases with both the membrane resistance rm and capacitance cm. As the capacitance increases, more charge must be transferred to produce a given transmembrane voltage (by the equation Q=CV); as the resistance increases, less charge is transferred per unit time, making the equilibration slower. In similar manner, if the internal resistance per unit length ri is lower in one axon than in another (e.g., because the radius of the former is larger), the spatial decay length λ becomes longer and the conduction velocity of an action potential should increase. If the transmembrane resistance rm is increased, that lowers the average "leakage" current across the membrane, likewise causing λ to become longer, increasing the conduction velocity.
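A small numerical sketch of those scales follows. The specific membrane resistance, specific capacitance and axoplasmic resistivity are round, textbook-style numbers chosen only for illustration, and the helper function is our own.

```python
import math

def cable_constants(radius_cm, R_m=20000.0, C_m=1e-6, R_i=100.0):
    """Length constant (cm) and time constant (s) for a passive, unmyelinated cable.
    R_m: specific membrane resistance (ohm cm^2), C_m: specific capacitance (F/cm^2),
    R_i: axoplasmic resistivity (ohm cm). All parameter values are illustrative assumptions."""
    lam = math.sqrt(radius_cm * R_m / (2.0 * R_i))   # length constant grows as sqrt(radius)
    tau = R_m * C_m                                   # time constant is independent of radius
    return lam, tau

for radius_um in (0.5, 5.0, 50.0):
    lam, tau = cable_constants(radius_um * 1e-4)
    # lambda/tau gives a crude velocity scale, which grows with the square root of the radius
    print(f"radius {radius_um:5.1f} um: lambda = {lam*10:.2f} mm, tau = {tau*1000:.1f} ms")
```

The output shows the length constant, and hence the rough conduction velocity of an unmyelinated fiber, increasing with the square root of the axon radius, consistent with the qualitative argument above.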
In general, action potentials that reach the synaptic knobs cause a neurotransmitter to be released into the synaptic cleft.[z] Neurotransmitters are small molecules that may open ion channels in the postsynaptic cell; most axons have the same neurotransmitter at all of their termini. The arrival of the action potential opens voltage-sensitive calcium channels in the presynaptic membrane; the influx of calcium causes vesicles filled with neurotransmitter to migrate to the cell's surface and release their contents into the synaptic cleft.[aa] This complex process is inhibited by the neurotoxins tetanospasmin and botulinum toxin, which are responsible for tetanus and botulism, respectively.[ab]
Some synapses dispense with the "middleman" of the neurotransmitter, and connect the presynaptic and postsynaptic cells together.[ac] When an action potential reaches such a synapse, the ionic currents flowing into the presynaptic cell can cross the barrier of the two cell membranes and enter the postsynaptic cell through pores known as connexons.[ad] Thus, the ionic currents of the presynaptic action potential can directly stimulate the postsynaptic cell. Electrical synapses allow for faster transmission because they do not require the slow diffusion of neurotransmitters across the synaptic cleft. Hence, electrical synapses are used whenever fast response and coordination of timing are crucial, as in escape reflexes, the retina of vertebrates, and the heart.
A special case of a chemical synapse is the neuromuscular junction, in which the axon of a motor neuron terminates on a muscle fiber.[ae] In such cases, the released neurotransmitter is acetylcholine, which binds to the acetylcholine receptor, an integral membrane protein in the membrane (the sarcolemma) of the muscle fiber.[af] However, the acetylcholine does not remain bound; rather, it dissociates and is hydrolyzed by the enzyme, acetylcholinesterase, located in the synapse. This enzyme quickly reduces the stimulus to the muscle, which allows the degree and timing of muscular contraction to be regulated delicately. Some poisons inactivate acetylcholinesterase to prevent this control, such as the nerve agents sarin and tabun,[ag] and the insecticides diazinon and malathion.[ah]
Other cell types
Cardiac action potentials
The cardiac action potential differs from the neuronal action potential by having an extended plateau, in which the membrane is held at a high voltage for a few hundred milliseconds prior to being repolarized by the potassium current as usual.[ai] This plateau is due to the action of slower calcium channels opening and holding the membrane voltage near their equilibrium potential even after the sodium channels have inactivated.
The cardiac action potential plays an important role in coordinating the contraction of the heart.[ai] The cardiac cells of the sinoatrial node provide the pacemaker potential that synchronizes the heart. The action potentials of those cells propagate to and through the atrioventricular node (AV node), which is normally the only conduction pathway between the atria and the ventricles. Action potentials from the AV node travel through the bundle of His and thence to the Purkinje fibers.[note 2] Conversely, anomalies in the cardiac action potential—whether due to a congenital mutation or injury—can lead to human pathologies, especially arrhythmias.[ai] Several anti-arrhythmia drugs act on the cardiac action potential, such as quinidine, lidocaine, beta blockers, and verapamil.[aj]
Muscular action potentials
The action potential in a normal skeletal muscle cell is similar to the action potential in neurons. Action potentials result from the depolarization of the cell membrane (the sarcolemma), which opens voltage-sensitive sodium channels; these become inactivated and the membrane is repolarized through the outward current of potassium ions. The resting potential prior to the action potential is typically −90mV, somewhat more negative than typical neurons. The muscle action potential lasts roughly 2–4 ms, the absolute refractory period is roughly 1–3 ms, and the conduction velocity along the muscle is roughly 5 m/s. The action potential releases calcium ions that free up the tropomyosin and allow the muscle to contract. Muscle action potentials are provoked by the arrival of a pre-synaptic neuronal action potential at the neuromuscular junction, which is a common target for neurotoxins.[ag]
Plant action potentials
Plant and fungal cells[ak] are also electrically excitable. The fundamental difference from animal action potentials is that the depolarization in plant cells is not accomplished by an uptake of positive sodium ions, but by a release of negative chloride ions.[al][am][an] Together with the subsequent release of positive potassium ions, which is common to plant and animal action potentials, the action potential in plants therefore entails an osmotic loss of salt (KCl), whereas the animal action potential is osmotically neutral, since equal amounts of entering sodium and leaving potassium cancel each other osmotically. The interaction of electrical and osmotic relations in plant cells[ao] indicates an osmotic function of electrical excitability in the common, unicellular ancestors of plants and animals under changing salinity conditions, whereas the present function of rapid signal transmission is seen as a younger accomplishment of metazoan cells in a more stable osmotic environment. It must be assumed that the familiar signalling function of action potentials in some vascular plants (e.g. Mimosa pudica) arose independently from that in metazoan excitable cells.
Taxonomic distribution and evolutionary advantages
Action potentials are found throughout multicellular organisms, including plants, invertebrates such as insects, and vertebrates such as reptiles and mammals.[ap] Sponges seem to be the main phylum of multicellular eukaryotes that does not transmit action potentials, although some studies have suggested that these organisms have a form of electrical signaling, too.[aq] The resting potential, as well as the size and duration of the action potential, have not varied much with evolution, although the conduction velocity does vary dramatically with axonal diameter and myelination.
| Animal | Cell type | Resting potential (mV) | AP increase (mV) | AP duration (ms) | Conduction speed (m/s) |
|---|---|---|---|---|---|
| Squid (Loligo) | Giant axon | −60 | 120 | 0.75 | 35 |
| Earthworm (Lumbricus) | Median giant fiber | −70 | 100 | 1.0 | 30 |
| Cockroach (Periplaneta) | Giant fiber | −70 | 80–104 | 0.4 | 10 |
| Frog (Rana) | Sciatic nerve axon | −60 to −80 | 110–130 | 1.0 | 7–30 |
| Cat (Felis) | Spinal motor neuron | −55 to −80 | 80–110 | 1–1.5 | 30–120 |
Given its conservation throughout evolution, the action potential seems to confer evolutionary advantages. One function of action potentials is rapid, long-range signaling within the organism; the conduction velocity can exceed 110 m/s, which is one-third the speed of sound. For comparison, a hormone molecule carried in the bloodstream moves at roughly 8 m/s in large arteries. Part of this function is the tight coordination of mechanical events, such as the contraction of the heart. A second function is the computation associated with its generation. Being an all-or-none signal that does not decay with transmission distance, the action potential has similar advantages to digital electronics. The integration of various dendritic signals at the axon hillock and its thresholding to form a complex train of action potentials is another form of computation, one that has been exploited biologically to form central pattern generators and mimicked in artificial neural networks.
The study of action potentials has required the development of new experimental methods. The initial work, prior to 1955, was carried out primarily by Alan Lloyd Hodgkin and Andrew Fielding Huxley, who, along with John Carew Eccles, were awarded the 1963 Nobel Prize in Physiology or Medicine for their contribution to the description of the ionic basis of nerve conduction. The work focused on three goals: isolating signals from single neurons or axons, developing fast, sensitive electronics, and shrinking electrodes enough that the voltage inside a single cell could be recorded.
The first problem was solved by studying the giant axons found in the neurons of the squid (Loligo forbesii and Doryteuthis pealeii, at the time classified as Loligo pealeii).[ar] These axons are so large in diameter (roughly 1 mm, or 100-fold larger than a typical neuron) that they can be seen with the naked eye, making them easy to extract and manipulate.[i][as] However, they are not representative of all excitable cells, and numerous other systems with action potentials have been studied.
The second problem was addressed with the crucial development of the voltage clamp,[at] which permitted experimenters to study the ionic currents underlying an action potential in isolation, and eliminated a key source of electronic noise, the current I_C associated with the capacitance C of the membrane. Since the current equals C times the rate of change of the transmembrane voltage V_m, the solution was to design a circuit that kept V_m fixed (zero rate of change) regardless of the currents flowing across the membrane. Thus, the current required to keep V_m at a fixed value is a direct reflection of the current flowing through the membrane. Other electronic advances included the use of Faraday cages and electronics with high input impedance, so that the measurement itself did not affect the voltage being measured.
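Stated compactly — this just restates the reasoning above, with I_ion denoting the total current through the ion channels:

$$ I_{\text{total}} = \underbrace{C\,\frac{dV_m}{dt}}_{I_C} + I_{\text{ion}}, \qquad \frac{dV_m}{dt} = 0 \;\Rightarrow\; I_{\text{total}} = I_{\text{ion}}. $$

Holding V_m fixed zeroes the capacitive term, so the current the clamp circuit must supply is exactly the ionic current through the membrane.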
The third problem, that of obtaining electrodes small enough to record voltages within a single axon without perturbing it, was solved in 1949 with the invention of the glass micropipette electrode,[au] which was quickly adopted by other researchers.[av][aw] Refinements of this method are able to produce electrode tips that are as fine as 100 Å (10 nm), which also confers high input impedance. Action potentials may also be recorded with small metal electrodes placed just next to a neuron, with neurochips containing EOSFETs, or optically with dyes that are sensitive to Ca2+ or to voltage.[ax]
While glass micropipette electrodes measure the sum of the currents passing through many ion channels, studying the electrical properties of a single ion channel became possible in the 1970s with the development of the patch clamp by Erwin Neher and Bert Sakmann. For this they were awarded the Nobel Prize in Physiology or Medicine in 1991.[γ] Patch-clamping verified that ionic channels have discrete states of conductance, such as open, closed and inactivated.
Optical imaging technologies have been developed in recent years to measure action potentials, either via simultaneous multisite recordings or with ultra-high spatial resolution. Using voltage-sensitive dyes, action potentials have been optically recorded from a tiny patch of cardiomyocyte membrane.[ay]
Several neurotoxins, both natural and synthetic, block the action potential. Tetrodotoxin from the pufferfish and saxitoxin from Gonyaulax (the dinoflagellate genus responsible for "red tides") block action potentials by inhibiting the voltage-sensitive sodium channel;[az] similarly, dendrotoxin from the black mamba snake inhibits the voltage-sensitive potassium channel. Such inhibitors of ion channels serve an important research purpose, by allowing scientists to "turn off" specific channels at will, thus isolating the other channels' contributions; they can also be useful in purifying ion channels by affinity chromatography or in assaying their concentration. However, such inhibitors also make effective neurotoxins, and have been considered for use as chemical weapons. Neurotoxins aimed at the ion channels of insects have been effective insecticides; one example is the synthetic permethrin, which prolongs the activation of the sodium channels involved in action potentials. The ion channels of insects are sufficiently different from their human counterparts that there are few side effects in humans.
The role of electricity in the nervous systems of animals was first observed in dissected frogs by Luigi Galvani, who studied it from 1791 to 1797.[ba] Galvani's results stimulated Alessandro Volta to develop the Voltaic pile—the earliest-known electric battery—with which he studied animal electricity (such as electric eels) and the physiological responses to applied direct-current voltages.[bb]
Scientists of the 19th century studied the propagation of electrical signals in whole nerves (i.e., bundles of neurons) and demonstrated that nervous tissue was made up of cells, instead of an interconnected network of tubes (a reticulum). Carlo Matteucci followed up Galvani's studies and demonstrated that cell membranes had a voltage across them and could produce direct current. Matteucci's work inspired the German physiologist Emil du Bois-Reymond, who discovered the action potential in 1848. The conduction velocity of action potentials was first measured in 1850 by du Bois-Reymond's friend, Hermann von Helmholtz. To establish that nervous tissue is made up of discrete cells, the Spanish physician Santiago Ramón y Cajal and his students used a stain developed by Camillo Golgi to reveal the myriad shapes of neurons, which they rendered painstakingly. For their discoveries, Golgi and Ramón y Cajal were awarded the 1906 Nobel Prize in Physiology or Medicine.[δ] Their work resolved a long-standing controversy in the neuroanatomy of the 19th century; Golgi himself had argued for the network model of the nervous system.
The 20th century was a golden era for electrophysiology. In 1902 and again in 1912, Julius Bernstein advanced the hypothesis that the action potential resulted from a change in the permeability of the axonal membrane to ions.[bc] Bernstein's hypothesis was confirmed by Ken Cole and Howard Curtis, who showed that membrane conductance increases during an action potential.[bd] In 1907, Louis Lapicque suggested that the action potential was generated as a threshold was crossed,[be] which would later be shown to be a product of the dynamical systems of ionic conductances. In 1949, Alan Hodgkin and Bernard Katz refined Bernstein's hypothesis by considering that the axonal membrane might have different permeabilities to different ions; in particular, they demonstrated the crucial role of the sodium permeability for the action potential.[bf] They made the first actual recording of the electrical changes across the neuronal membrane that mediate the action potential.[ε] This line of research culminated in the five 1952 papers of Hodgkin, Katz and Andrew Huxley, in which they applied the voltage clamp technique to determine the dependence of the axonal membrane's permeabilities to sodium and potassium ions on voltage and time, from which they were able to reconstruct the action potential quantitatively.[i] Hodgkin and Huxley correlated the properties of their mathematical model with discrete ion channels that could exist in several different states, including "open", "closed", and "inactivated". Their hypotheses were confirmed in the mid-1970s and 1980s by Erwin Neher and Bert Sakmann, who developed the technique of patch clamping to examine the conductance states of individual ion channels.[bg] In the 21st century, researchers are beginning to understand the structural basis for these conductance states and for the selectivity of channels for their species of ion,[bh] through atomic-resolution crystal structures,[bi] fluorescence distance measurements[bj] and cryo-electron microscopy studies.[bk]
Julius Bernstein was also the first to introduce the Nernst equation for resting potential across the membrane; this was generalized by David E. Goldman to the eponymous Goldman equation in 1943.[h] The sodium–potassium pump was identified in 1957[bl][ζ] and its properties gradually elucidated,[bm][bn][bo] culminating in the determination of its atomic-resolution structure by X-ray crystallography.[bp] The crystal structures of related ionic pumps have also been solved, giving a broader view of how these molecular machines work.[bq]
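For reference, the two equations named here take their standard textbook forms (they are not quoted in the text above). For a single ion species X with valence z, and for a membrane permeable to sodium, potassium and chloride with permeabilities P:

$$ E_X = \frac{RT}{zF}\ln\frac{[X]_{\text{out}}}{[X]_{\text{in}}}, \qquad E_m = \frac{RT}{F}\ln\frac{P_{\text{Na}}[\text{Na}^+]_{\text{out}} + P_{\text{K}}[\text{K}^+]_{\text{out}} + P_{\text{Cl}}[\text{Cl}^-]_{\text{in}}}{P_{\text{Na}}[\text{Na}^+]_{\text{in}} + P_{\text{K}}[\text{K}^+]_{\text{in}} + P_{\text{Cl}}[\text{Cl}^-]_{\text{out}}}. $$

Here R is the gas constant, T the absolute temperature and F Faraday's constant; the chloride concentrations appear inverted because of that ion's negative valence.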
Mathematical and computational models are essential for understanding the action potential, and offer predictions that may be tested against experimental data, providing a stringent test of a theory. The most important and accurate of the early neural models is the Hodgkin–Huxley model, which describes the action potential by a coupled set of four ordinary differential equations (ODEs).[i] Although the Hodgkin–Huxley model is itself a simplification, with limitations, compared to the realistic nervous membrane as it exists in nature, its complexity has inspired several even-more-simplified models,[br] such as the Morris–Lecar model[bs] and the FitzHugh–Nagumo model,[bt] both of which have only two coupled ODEs. The properties of the Hodgkin–Huxley and FitzHugh–Nagumo models and their relatives, such as the Bonhoeffer–van der Pol model,[bu] have been well studied within mathematics,[bv] computation and electronics.[bw] However, these simple models of the generator potential and action potential fail to accurately reproduce the near-threshold neural spike rate and spike shape, specifically for mechanoreceptors like the Pacinian corpuscle. More modern research has focused on larger and more integrated systems; by joining action-potential models with models of other parts of the nervous system (such as dendrites and synapses), researchers can study neural computation and simple reflexes, such as escape reflexes and others controlled by central pattern generators.[bx]
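To illustrate how compact the two-ODE models are, here is a minimal numerical sketch of the FitzHugh–Nagumo system (forward-Euler integration; the parameter values are typical textbook choices, not taken from the text above):

```python
# FitzHugh-Nagumo model, integrated with forward Euler:
#   dv/dt = v - v^3/3 - w + I_ext   (fast, voltage-like variable)
#   dw/dt = eps * (v + a - b*w)     (slow recovery variable)
a, b, eps, I_ext = 0.7, 0.8, 0.08, 0.5   # typical textbook parameters
v, w = -1.0, 1.0                         # arbitrary initial state
dt, steps = 0.01, 50_000
trace = []
for _ in range(steps):
    dv = v - v**3 / 3 - w + I_ext
    dw = eps * (v + a - b * w)
    v, w = v + dt * dv, w + dt * dw
    trace.append(v)

# With I_ext above threshold the model fires a periodic spike train;
# sample the voltage-like variable at regular intervals to see it:
print([round(x, 2) for x in trace[::5000]])
```

Dropping from four equations to two loses the explicit sodium and potassium conductances, but keeps the all-or-none spike and the refractory dynamics, which is why these reduced models are the usual entry point for mathematical analysis.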
- Anode break excitation
- Central pattern generator
- Neural accommodation
- Single-unit recording
- Soliton model in neuroscience
- In general, while this simple description of action potential initiation is accurate, it does not explain phenomena such as excitation block (the ability to prevent neurons from eliciting action potentials by stimulating them with large current steps) and the ability to elicit action potentials by briefly hyperpolarizing the membrane. By analyzing the dynamics of a system of sodium and potassium channels in a membrane patch using computational models, however, these phenomena are readily explained.[α]
- Note that these Purkinje fibers are muscle fibers and not related to the Purkinje cells, which are neurons found in the cerebellum.
- Purves D, Augustine GJ, Fitzpatrick D, et al., editors. Neuroscience. 2nd edition. Sunderland (MA): Sinauer Associates; 2001. Voltage-Gated Ion Channels. Available from: http://www.ncbi.nlm.nih.gov/books/NBK10883/
- Bullock, Orkand & Grinnell 1977, pp. 150-151.
- Junge 1981, pp. 89-90.
- Schmidt-Nielsen 1997, p. 484.
- Purves et al. 2008, pp. 48-49; Bullock, Orkand & Grinnell 1977, p. 141; Schmidt-Nielsen 1997, p. 483; Junge 1981, p. 89.
- Stevens 1966, p. 127.
- Schmidt-Nielsen 1997, p. 484.
- Bullock, Orkand & Grinnell 1977, p. 11.
- Silverthorn 2010, p. 253.
- Purves et al. 2008, pp. 49-50; Bullock, Orkand & Grinnell 1977, pp. 140-141; Schmidt-Nielsen 1997, pp. 480-481.
- Schmidt-Nielsen 1997, pp. 483-484.
- Bullock, Orkand & Grinnell 1977, pp. 177-240; Schmidt-Nielsen 1997, pp. 490-499; Stevens 1966, p. 47-68.
- Bullock, Orkand & Grinnell 1977, pp. 178-180; Schmidt-Nielsen 1997, pp. 490-491.
- Purves et al. 2001.
- Purves et al. 2008, pp. 26-28.
- Schmidt-Nielsen 1997, pp. 535-580; Bullock, Orkand & Grinnell 1977, pp. 49-56, 76-93, 247-255; Stevens 1966, pp. 69-79.
- Bullock, Orkand & Grinnell 1977, pp. 53; Bullock, Orkand & Grinnell 1977, pp. 122-124.
- Junge 1981, pp. 115-132.
- Bullock, Orkand & Grinnell 1977, pp. 152-153.
- Bullock, Orkand & Grinnell 1977, pp. 444-445.
- Purves et al. 2008, p. 38.
- Stevens 1966, pp. 127-128.
- Purves et al. 2008, p. 61-65.
- Purves et al. 2008, pp. 64-74; Bullock, Orkand & Grinnell 1977, pp. 149-150; Junge 1981, pp. 84-85; Stevens 1966, pp. 152-158.
- Purves et al. 2008, p. 47; Purves et al. 2008, p. 65; Bullock, Orkand & Grinnell 1977, pp. 147-148; Stevens 1966, p. 128.
- Goldin, AL in Waxman 2007, Neuronal Channels and Receptors, pp. 43-58.
- Stevens 1966, p. 49.
- Purves et al. 2008, p. 34; Bullock, Orkand & Grinnell 1977, p. 134; Schmidt-Nielsen 1997, pp. 478-480.
- Purves et al. 2008, p. 49.
- Stevens 1966, pp. 19-20.
- Bullock, Orkand & Grinnell 1977, p. 151; Junge 1981, pp. 4-5.
- Bullock, Orkand & Grinnell 1977, p. 152.
- Bullock, Orkand & Grinnell 1977, pp. 147-149; Stevens 1966, pp. 126-127.
- Purves et al. 2008, p. 37.
- Purves et al. 2008, p. 56.
- Bullock, Orkand & Grinnell 1977, pp. 160-164.
- Stevens 1966, pp. 21-23.
- Bullock, Orkand & Grinnell 1977, pp. 161-164.
- Bullock, Orkand & Grinnell 1977, p. 509.
- Tasaki, I in Field 1959, pp. 75–121
- Schmidt-Nielsen 1997, Figure 12.13.
- Bullock, Orkand & Grinnell 1977, p. 163.
- Waxman, SG in Waxman 2007, Multiple Sclerosis as a Neurodegenerative Disease, pp. 333-346.
- Rall, W in Koch & Segev 1989, Cable Theory for Dendritic Neurons, p. 9-62.
- Segev, I; Fleshman, JW; Burke, RE in Koch & Segev 1989, Compartmental Models of Complex Neurons, pp. 63-96.
- Purves et al. 2008, pp. 52-53.
- Ganong 1991, pp. 59-60.
- Gradmann, D; Mummert, H in Spanswick, Lucas & Dainty 1980, Plant action potentials, pp. 333-344.
- Bullock 1965.
- Hellier, Jennifer L. (2014). The Brain, the Nervous System, and Their Diseases. ABC-Clio. p. 532. ISBN 9781610693387.
- Junge 1981, pp. 63-82.
- Kettenmann & Grantyn 1992.
- Snell, FM in Lavallee, Schanne & Hebert 1969, Some Electrical Properties of Fine-Tipped Pipette Microelectrodes.
- Brazier 1961; McHenry & Garrison 1969; Worden, Swazey & Adelman 1975.
- Bernstein 1912.
- Baranauskas, G.; Martina, M. (2006). "Sodium Currents Activate without a Hodgkin and Huxley-Type Delay in Central Mammalian Neurons". J. Neurosci. 26 (2): 671–684. doi:10.1523/jneurosci.2283-05.2006. PMID 16407565.
- Hoppensteadt 1986.
- Sato, S; Fukai, H; Nomura, T; Doi, S in Reeke et al. 2005, Bifurcation Analysis of the Hodgkin-Huxley Equations, pp. 459-478.
* FitzHugh, R in Schwann 1969, Mathematical models of excitation and propagation in nerve, pp. 12-16.
* Guckenheimer & Holmes 1986, pp. 12–16
- Nelson, ME; Rinzel, J in Bower & Beeman 1995, The Hodgkin-Huxley Model, pp. 29-49.
* Rinzel, J & Ermentrout, GB; in Koch & Segev 1989, Analysis of Neural Excitability and Oscillations, pp. 135-169.
- Biswas, Abhijit; Manivannan, M.; Srinivasan, Mandyam A. (2015). "Vibrotactile Sensitivity Threshold: Nonlinear Stochastic Mechanotransduction Model of the Pacinian Corpuscle". IEEE Transactions on Haptics. 8 (1): 102–113. doi:10.1109/TOH.2014.2369422. PMID 25398183.
- McCulloch 1988, pp. 19–39, 46–66, 72–141; Anderson & Rosenfeld 1988, pp. 15-41.
- Getting, PA in Koch & Segev 1989, Reconstruction of Small Neural Networks, pp. 171-194.
- Anderson, JA; Rosenfeld, E, eds. (1988). Neurocomputing: Foundations of Research. Cambridge, Mass.: The MIT Press. ISBN 978-0-262-01097-9. LCCN 87003022. OCLC 15860311.
- Bernstein, J (1912). Elektrobiologie, die Lehre von den elektrischen Vorgängen im Organismus auf moderner Grundlage dargestellt [Electric Biology, the study of the electrical processes in the organism represented on a modern basis]. Braunschweig: Vieweg und Sohn. LCCN 12027986. OCLC 11358569.
- Bower, JM; Beeman, D (1995). The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System. Santa Clara, Calif.: TELOS. ISBN 978-0-387-94019-9. LCCN 94017624. OCLC 30518469.
- Brazier, MAB (1961). A History of the Electrical Activity of the Brain. London: Pitman. LCCN 62001407. OCLC 556863.
- Bullock, TH; Horridge, GA (1965). Structure and Function in the Nervous Systems of Invertebrates. A series of books in biology. San Francisco: W. H. Freeman. LCCN 65007965. OCLC 558128.
- Bullock, TH; Orkand, R; Grinnell, A (1977). Introduction to Nervous Systems. A series of books in biology. San Francisco: W. H. Freeman. ISBN 978-0-7167-0030-2. LCCN 76003735. OCLC 2048177.
- Field, J, ed. (1959). Handbook of Physiology: a Critical, Comprehensive Presentation of Physiological Knowledge and Concepts: Section 1: Neurophysiology. 1. Washington, DC: American Physiological Society. LCCN 60004587. OCLC 830755894.
- Ganong, WF (1991). Review of Medical Physiology (15th ed.). Norwalk, Conn.: Appleton and Lange. ISBN 978-0-8385-8418-7. ISSN 0892-1253. LCCN 87642343. OCLC 23761261.
- Guckenheimer, J; Holmes, P (1986). Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields. Applied Mathematical Sciences. 42 (2nd ed.). New York: Springer Verlag. ISBN 978-0-387-90819-9. OCLC 751129941.
- Hoppensteadt, FC (1986). An Introduction to the Mathematics of Neurons. Cambridge studies in mathematical biology. 6. Cambridge: Cambridge University Press. ISBN 978-0-521-31574-6. LCCN 85011013. OCLC 12052275.
- Junge, D (1981). Nerve and Muscle Excitation (2nd ed.). Sunderland, Mass.: Sinauer Associates. ISBN 978-0-87893-410-2. LCCN 80018158. OCLC 6486925.
- Kettenmann, H; Grantyn, R, eds. (1992). Practical Electrophysiological Methods: A Guide for In Vitro Studies in Vertebrate Neurobiology. New York: Wiley. ISBN 978-0-471-56200-9. LCCN 92000179. OCLC 25204689.
- Keynes, RD; Aidley, DJ (1991). Nerve and Muscle (2nd ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-41042-7. LCCN 90015167. OCLC 25204483.
- Koch, C; Segev, I, eds. (1989). Methods in Neuronal Modeling: From Synapses to Networks. Cambridge, Mass.: The MIT Press. ISBN 978-0-262-11133-1. LCCN 88008279. OCLC 18384545.
- Lavallée, M; Schanne, OF; Hébert, NC, eds. (1969). Glass Microelectrodes. New York: Wiley. ISBN 978-0-471-51885-3. LCCN 68009252. OCLC 686.
- McCulloch, WS (1988). Embodiments of Mind. Cambridge, Mass.: The MIT Press. ISBN 978-0-262-63114-3. LCCN 88002987. OCLC 237280.
- McHenry, LC; Garrison, FH (1969). Garrison's History of Neurology. Springfield, Ill.: Charles C. Thomas. OCLC 429733931.
- Silverthorn, DU (2010). Human Physiology: An Integrated Approach (5th ed.). San Francisco: Pearson. ISBN 978-0-321-55980-7. LCCN 2008050369. OCLC 268788623.
- Spanswick, RM; Lucas, WJ; Dainty, J, eds. (1980). Plant Membrane Transport: Current Conceptual Issues. Developments in Plant Biology. 4. Amsterdam: Elsevier Biomedical Press. ISBN 978-0-444-80192-0. LCCN 79025719. OCLC 5799924.
- Purves, D; Augustine, GJ; Fitzpatrick, D; Hall, WC; Lamantia, A-S; McNamara, JO; Williams, SM (2001). "Release of Transmitters from Synaptic Vesicles". Neuroscience (2nd ed.). Sunderland, MA: Sinauer Associates. ISBN 978-0-87893-742-4. LCCN 00059496. OCLC 806472664.
- Purves, D; Augustine, GJ; Fitzpatrick, D; Hall, WC; Lamantia, A-S; McNamara, JO; White, LE (2008). Neuroscience (4th ed.). Sunderland, MA: Sinauer Associates. ISBN 978-0-87893-697-7. LCCN 2007024950. OCLC 144771764.
- Reeke, GN; Poznanski, RR; Sporns, O; Rosenberg, JR; Lindsay, KA, eds. (2005). Modeling in the Neurosciences: from Biological Systems to Neuromimetic Robotics. Boca Raton, Fla.: Taylor & Francis. ISBN 978-0-415-32868-5. LCCN 2005298022. OCLC 489024131.
- Schmidt-Nielsen, K (1997). Animal Physiology: Adaptation and Environment (5th ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-57098-5. LCCN 96039295. OCLC 35744403.
- Schwann, HP, ed. (1969). Biological Engineering. Inter-University Electronics Series. 9. New York: McGraw-Hill. ISBN 978-0-07-055734-5. LCCN 68027513. OCLC 51993.
- Stevens, CF (1966). Neurophysiology: A Primer. New York: John Wiley and Sons. LCCN 66015872. OCLC 1175605.
- Waxman, SG, ed. (2007). Molecular Neurology. Burlington, Mass.: Elsevier Academic Press. ISBN 978-0-12-369509-3. LCCN 2008357317. OCLC 154760295.
- Worden, FG; Swazey, JP; Adelman, G, eds. (1975). The Neurosciences, Paths of Discovery. Cambridge, Mass.: The MIT Press. ISBN 978-0-262-23072-8. LCCN 75016379. OCLC 1500233.
- MacDonald PE, Rorsman P (February 2006). "Oscillations, intercellular coupling, and insulin secretion in pancreatic beta cells". PLoS Biol. 4 (2): e49. doi:10.1371/journal.pbio.0040049. PMC . PMID 16464129.
- Barnett MW; Larkman PM (June 2007). "The action potential". Pract Neurol. 7 (3): 192–7. PMID 17515599.
- Golding NL, Kath WL, Spruston N (December 2001). "Dichotomy of action-potential backpropagation in CA1 pyramidal neuron dendrites". J. Neurophysiol. 86 (6): 2998–3010. PMID 11731556.
- Sasaki, T., Matsuki, N., Ikegaya, Y. 2011 Action-potential modulation during axonal conduction Science 331 (6017), pp. 599-601
- Aur, D.; Connolly, C.I.; Jog, M.S. (2005). "Computing spike directivity with tetrodes". Journal of Neuroscience Methods. 149 (1): 57–63. doi:10.1016/j.jneumeth.2005.05.006. PMID 15978667.
- Aur D., Jog, MS., 2010 Neuroelectrodynamics: Understanding the brain language, IOS Press, 2010. doi:10.3233/978-1-60750-473-3-i
- Noble D (1960). "Cardiac action and pacemaker potentials based on the Hodgkin-Huxley equations". Nature. 188 (4749): 495–497. Bibcode:1960Natur.188..495N. doi:10.1038/188495b0. PMID 13729365.
- Goldman DE (1943). "Potential, impedance and rectification in membranes". J. Gen. Physiol. 27 (1): 37–60. doi:10.1085/jgp.27.1.37. PMC . PMID 19873371.
- Hodgkin AL, Huxley AF, Katz B (1952). "Measurements of current-voltage relations in the membrane of the giant axon of Loligo". Journal of Physiology. 116 (4): 424–448. doi:10.1113/jphysiol.1952.sp004716. PMC . PMID 14946712.
* Hodgkin AL, Huxley AF (1952). "Currents carried by sodium and potassium ions through the membrane of the giant axon of Loligo". Journal of Physiology. 116 (4): 449–472. doi:10.1113/jphysiol.1952.sp004717. PMC . PMID 14946713.
* Hodgkin AL, Huxley AF (1952). "The components of membrane conductance in the giant axon of Loligo". J Physiol. 116 (4): 473–496. doi:10.1113/jphysiol.1952.sp004718. PMC . PMID 14946714.
* Hodgkin AL, Huxley AF (1952). "The dual effect of membrane potential on sodium conductance in the giant axon of Loligo". J Physiol. 116 (4): 497–506. doi:10.1113/jphysiol.1952.sp004719. PMC . PMID 14946715.
* Hodgkin AL, Huxley AF (1952). "A quantitative description of membrane current and its application to conduction and excitation in nerve". J Physiol. 117 (4): 500–544. doi:10.1113/jphysiol.1952.sp004764. PMC . PMID 12991237.
- Naundorf B, Wolf F, Volgushev M (April 2006). "Unique features of action potential initiation in cortical neurons" (Letter). Nature. 440 (7087): 1060–1063. Bibcode:2006Natur.440.1060N. doi:10.1038/nature04610. PMID 16625198. Retrieved 2008-03-27.
- Hodgkin AL (1937). "Evidence for electrical transmission in nerve, Part I". Journal of Physiology. 90 (2): 183–210. PMC . PMID 16994885.
* Hodgkin AL (1937). "Evidence for electrical transmission in nerve, Part II". Journal of Physiology. 90 (2): 211–32. PMC . PMID 16994886.
- Zalc B (2006). "The acquisition of myelin: a success story". Novartis Found. Symp. Novartis Foundation Symposia. 276: 15–21; discussion 21–5, 54–7, 275–81. doi:10.1002/9780470032244.ch3. ISBN 978-0-470-03224-4. PMID 16805421.
- S. Poliak; E. Peles (2006). "The local differentiation of myelinated axons at nodes of Ranvier". Nature Reviews Neuroscience. 12 (4): 968–80. doi:10.1038/nrn1253. PMID 14682359.
- Simons M, Trotter J (October 2007). "Wrapping it up: the cell biology of myelination". Curr. Opin. Neurobiol. 17 (5): 533–40. doi:10.1016/j.conb.2007.08.003. PMID 17923405.
- Xu K, Terakawa S (1 August 1999). "Fenestration nodes and the wide submyelinic space form the basis for the unusually fast impulse conduction of shrimp myelinated axons". J. Exp. Biol. 202 (Pt 15): 1979–89. PMID 10395528.
- Hursh JB (1939). "Conduction velocity and diameter of nerve fibers". American Journal of Physiology. 127: 131–39.
- Lillie RS (1925). "Factors affecting transmission and recovery in passive iron nerve model". J. Gen. Physiol. 7 (4): 473–507. doi:10.1085/jgp.7.4.473. PMC . PMID 19872151. See also Keynes and Aidley, p. 78.
- Tasaki I (1939). "Electro-saltatory transmission of nerve impulse and effect of narcosis upon nerve fiber". Am. J. Physiol. 127: 211–27.
- Tasaki I, Takeuchi T (1941). "Der am Ranvierschen Knoten entstehende Aktionsstrom und seine Bedeutung für die Erregungsleitung". Pflüger's Arch. Ges. Physiol. 244 (6): 696–711. doi:10.1007/BF01755414.
* Tasaki I, Takeuchi T (1942). "Weitere Studien über den Aktionsstrom der markhaltigen Nervenfaser und über die elektrosaltatorische Übertragung des nervenimpulses". Pflüger's Arch. Ges. Physiol. 245 (5): 764–82. doi:10.1007/BF01755237.
- Huxley A (1949). "Evidence for saltatory conduction in peripheral myelinated nerve-fibers". Journal of Physiology. 108 (3): 315–39. doi:10.1113/jphysiol.1949.sp004335.
* Huxley A (1949). "Direct determination of membrane resting potential and action potential in single myelinated nerve fibers". Journal of Physiology. 112 (3–4): 476–95. PMC . PMID 14825228.
- Rushton WAH (1951). "A theory of the effects of fibre size in the medullated nerve". Journal of Physiology. 115 (1): 101–22. PMC . PMID 14889433.
- Hartline DK, Colman DR (2007). "Rapid conduction and the evolution of giant axons and myelinated fibers". Curr. Biol. 17 (1): R29–R35. doi:10.1016/j.cub.2006.11.042. PMID 17208176.
- Miller RH, Mi S (2007). "Dissecting demyelination". Nat. Neurosci. 10 (11): 1351–54. doi:10.1038/nn1995. PMID 17965654.
- Kelvin WT (1855). "On the theory of the electric telegraph". Proceedings of the Royal Society. 7: 382–99. doi:10.1098/rspl.1854.0093.
- Hodgkin AL (1946). "The electrical constants of a crustacean nerve fibre". Proceedings of the Royal Society B. 133 (873): 444–79. Bibcode:1946RSPSB.133..444H. doi:10.1098/rspb.1946.0024. PMID 20281590.
- Südhof TC (2008). "Neurotransmitter release". Handb Exp Pharmacol. Handbook of Experimental Pharmacology. 184 (184): 1–21. doi:10.1007/978-3-540-74805-2_1. ISBN 978-3-540-74804-5. PMID 18064409.
- Rusakov DA (August 2006). "Ca2+-dependent mechanisms of presynaptic control at central synapses". Neuroscientist. 12 (4): 317–26. doi:10.1177/1073858405284672. PMC . PMID 16840708.
- Humeau Y, Doussau F, Grant NJ, Poulain B (May 2000). "How botulinum and tetanus neurotoxins block neurotransmitter release". Biochimie. 82 (5): 427–46. doi:10.1016/S0300-9084(00)00216-9. PMID 10865130.
- Zoidl G, Dermietzel R (2002). "On the search for the electrical synapse: a glimpse at the future". Cell Tissue Res. 310 (2): 137–42. doi:10.1007/s00441-002-0632-x. PMID 12397368.
- Brink PR, Cronin K, Ramanan SV (1996). "Gap junctions in excitable cells". J. Bioenerg. Biomembr. 28 (4): 351–8. doi:10.1007/BF02110111. PMID 8844332.
- Hirsch NP (July 2007). "Neuromuscular junction in health and disease". Br J Anaesth. 99 (1): 132–8. doi:10.1093/bja/aem144. PMID 17573397.
- Hughes BW, Kusner LL, Kaminski HJ (April 2006). "Molecular architecture of the neuromuscular junction". Muscle Nerve. 33 (4): 445–61. doi:10.1002/mus.20440. PMID 16228970.
- Newmark J (2007). "Nerve agents". Neurologist. 13 (1): 20–32. doi:10.1097/01.nrl.0000252923.04894.53. PMID 17215724.
- Costa LG (2006). "Current issues in organophosphate toxicology". Clin. Chim. Acta. 366 (1–2): 1–13. doi:10.1016/j.cca.2005.10.008. PMID 16337171.
- Kléber AG, Rudy Y (April 2004). "Basic mechanisms of cardiac impulse propagation and associated arrhythmias". Physiol. Rev. 84 (2): 431–88. doi:10.1152/physrev.00025.2003. PMID 15044680.
- Tamargo J, Caballero R, Delpón E (January 2004). "Pharmacological approaches in the treatment of atrial fibrillation". Curr. Med. Chem. 11 (1): 13–28. doi:10.2174/0929867043456241. PMID 14754423.
- Slayman CL, Long WS, Gradmann D (1976). "Action potentials in Neurospora crassa , a mycelial fungus". Biochimica et Biophysica Acta. 426 (4): 737–744. doi:10.1016/0005-2736(76)90138-3. PMID 130926.
- Mummert H, Gradmann D (1991). "Action potentials in Acetabularia: measurement and simulation of voltage-gated fluxes". Journal of Membrane Biology. 124 (3): 265–273. doi:10.1007/BF01994359. PMID 1664861.
- Gradmann D (2001). "Models for oscillations in plants". Austr. J. Plant Physiol. 28: 577–590.
- Beilby MJ (2007). "Action potentials in charophytes". Int. Rev. Cytol. International Review of Cytology. 257: 43–82. doi:10.1016/S0074-7696(07)57002-6. ISBN 978-0-12-373701-4. PMID 17280895.
- Gradmann D, Hoffstadt J (1998). "Electrocoupling of ion transporters in plants: Interaction with internal ion concentrations". Journal of Membrane Biology. 166 (1): 51–59. doi:10.1007/s002329900446. PMID 9784585.
- Fromm J, Lautner S (2007). "Electrical signals and their physiological significance in plants". Plant Cell Environ. 30 (3): 249–257. doi:10.1111/j.1365-3040.2006.01614.x. PMID 17263772.
- Leys SP, Mackie GO, Meech RW (1 May 1999). "Impulse conduction in a sponge". J. Exp. Biol. 202 (9): 1139–50. PMID 10101111.
- Keynes RD (1989). "The role of giant axons in studies of the nerve impulse". BioEssays. 10 (2–3): 90–93. doi:10.1002/bies.950100213. PMID 2541698.
- Meunier C, Segev I (2002). "Playing the devil's advocate: is the Hodgkin-Huxley model useful?". Trends Neurosci. 25 (11): 558–63. doi:10.1016/S0166-2236(02)02278-6. PMID 12392930.
- Cole KS (1949). "Dynamic electrical characteristics of the squid axon membrane". Arch. Sci. Physiol. 3: 253–8.
- Ling G, Gerard RW (1949). "The normal membrane potential of frog sartorius fibers". J. Cell. Comp. Physiol. 34 (3): 383–396. doi:10.1002/jcp.1030340304. PMID 15410483.
- Nastuk WL (1950). "The electrical activity of single muscle fibers". J. Cell. Comp. Physiol. 35: 39–73. doi:10.1002/jcp.1030350105.
- Brock LG, Coombs JS, Eccles JC (1952). "The recording of potentials from motoneurones with an intracellular electrode". J. Physiol. (London). 117: 431–460.
- Ross WN, Salzberg BM, Cohen LB, Davila HV (1974). "A large change in dye absorption during the action potential". Biophysical Journal. 14 (12): 983–986. Bibcode:1974BpJ....14..983R. doi:10.1016/S0006-3495(74)85963-1. PMC . PMID 4429774.
* Grynkiewicz G, Poenie M, Tsien RY (1985). "A new generation of Ca2+ indicators with greatly improved fluorescence properties". J. Biol. Chem. 260 (6): 3440–3450. PMID 3838314.
- Bu G, Adams H, Berbari EJ, Rubart M (March 2009). "Uniform action potential repolarization within the sarcolemma of in situ ventricular cardiomyocytes". Biophys. J. 96 (6): 2532–46. Bibcode:2009BpJ....96.2532B. doi:10.1016/j.bpj.2008.12.3896. PMC . PMID 19289075.
- Nakamura Y, Nakajima S, Grundfest H (1965). "The effect of tetrodotoxin on electrogenic components of squid giant axons". J. Gen. Physiol. 48 (6): 985–996. doi:10.1085/jgp.48.6.975.
* Ritchie JM, Rogart RB (1977). "The binding of saxitoxin and tetrodotoxin to excitable tissue". Rev. Physiol. Biochem. Pharmacol. Reviews of Physiology, Biochemistry and Pharmacology. 79: 1–50. doi:10.1007/BFb0037088. ISBN 0-387-08326-X. PMID 335473.
* Keynes RD, Ritchie JM (1984). "On the binding of labelled saxitoxin to the squid giant axon". Proc. R. Soc. Lond. 239 (1227): 393–434. Bibcode:1984RSPSB.222..147K. doi:10.1098/rspb.1984.0055.
- Piccolino M (1997). "Luigi Galvani and animal electricity: two centuries after the foundation of electrophysiology". Trends in Neuroscience. 20 (10): 443–448. doi:10.1016/S0166-2236(97)01101-6.
- Piccolino M (2000). "The bicentennial of the Voltaic battery (1800–2000): the artificial electric organ". Trends in Neuroscience. 23 (4): 147–151. doi:10.1016/S0166-2236(99)01544-1.
- Bernstein J (1902). "Untersuchungen zur Thermodynamik der bioelektrischen Ströme". Pflüger's Arch. Ges. Physiol. 92 (10–12): 521–562. doi:10.1007/BF01790181.
- Cole KS (1939). "Electrical impedance of the squid giant axon during activity". J. Gen. Physiol. 22 (5): 649–670. doi:10.1085/jgp.22.5.649. PMC . PMID 19873125.
- Lapicque L (1907). "Recherches quantitatives sur l'excitation électrique des nerfs traitée comme une polarisation". J. Physiol. Pathol. Gen. 9: 620–635.
- Hodgkin AL, Katz B (1949). "The effect of sodium ions on the electrical activity of the giant axon of the squid". J. Physiology. 108: 37–77. doi:10.1113/jphysiol.1949.sp004310.
- Neher E, Sakmann B (1976). "Single-channel currents recorded from membrane of denervated frog muscle fibres". Nature. 260 (5554): 799–802. Bibcode:1976Natur.260..799N. doi:10.1038/260799a0. PMID 1083489.
* Hamill OP (1981). "Improved patch-clamp techniques for high-resolution current recording from cells and cell-free membrane patches". Pflugers Arch. 391 (2): 85–100. doi:10.1007/BF00656997. PMID 6270629.
* Neher E (1992). "The patch clamp technique". Scientific American. 266 (3): 44–51. doi:10.1038/scientificamerican0392-44. PMID 1374932.
- Yellen G (2002). "The voltage-gated potassium channels and their relatives". Nature. 419 (6902): 35–42. doi:10.1038/nature00978. PMID 12214225.
- Doyle DA; Morais Cabral J; Pfuetzner RA; Kuo A; Gulbis JM; Cohen SL; et al. (1998). "The structure of the potassium channel, molecular basis of K+ conduction and selectivity". Science. 280 (5360): 69–77. Bibcode:1998Sci...280...69D. doi:10.1126/science.280.5360.69. PMID 9525859.
* Zhou Y, Morais-Cabral JH, Kaufman A, MacKinnon R (2001). "Chemistry of ion coordination and hydration revealed by a K+-Fab complex at 2.0 A resolution". Nature. 414 (6859): 43–48. Bibcode:2001Natur.414...43Z. doi:10.1038/35102009. PMID 11689936.
* Jiang Y, Lee A, Chen J, Ruta V, Cadene M, Chait BT, MacKinnon R (2003). "X-ray structure of a voltage-dependent K+ channel". Nature. 423 (6935): 33–41. Bibcode:2003Natur.423...33J. doi:10.1038/nature01580. PMID 12721618.
- Cha A, Snyder GE, Selvin PR, Bezanilla F (1999). "Atomic-scale movement of the voltage-sensing region in a potassium channel measured via spectroscopy". Nature. 402 (6763): 809–813. doi:10.1038/45552. PMID 10617201.
* Glauner KS, Mannuzzu LM, Gandhi CS, Isacoff E (1999). "Spectroscopic mapping of voltage sensor movement in the Shaker potassium channel". Nature. 402 (6763): 813–817. Bibcode:1999Natur.402..813G. doi:10.1038/45561. PMID 10617202.
* Bezanilla F (2000). "The voltage sensor in voltage-dependent ion channels". Physiol. Rev. 80 (2): 555–592. PMID 10747201.
- Catterall WA (2001). "A 3D view of sodium channels". Nature. 409 (6823): 988–999. Bibcode:2001Natur.409..988C. doi:10.1038/35059188. PMID 11234048.
* Sato C; Ueno Y; Asai K; Takahashi K; Sato M; Engel A; et al. (2001). "The voltage-sensitive sodium channel is a bell-shaped molecule with several cavities". Nature. 409 (6823): 1047–1051. Bibcode:2001Natur.409.1047S. doi:10.1038/35059098. PMID 11234014.
- Skou J (1957). "The influence of some cations on an adenosine triphosphatase from peripheral nerves". Biochim Biophys Acta. 23 (2): 394–401. doi:10.1016/0006-3002(57)90343-8. PMID 13412736.
- Hodgkin AL, Keynes RD (1955). "Active transport of cations in giant axons from Sepia and Loligo". J. Physiol. 128 (1): 28–60. doi:10.1113/jphysiol.1955.sp005290. PMC . PMID 14368574.
- Caldwell PC, Hodgkin AL, Keynes RD, Shaw TI (1960). "The effects of injecting energy-rich phosphate compounds on the active transport of ions in the giant axons of Loligo". J. Physiol. 152 (3): 561–90. PMC . PMID 13806926.
- Caldwell PC, Keynes RD (1957). "The utilization of phosphate bond energy for sodium extrusion from giant axons". J. Physiol. (London). 137 (1): 12–13P. PMID 13439598.
- Morth JP, Pedersen PB, Toustrup-Jensen MS, Soerensen TL, Petersen J, Andersen JP, Vilsen B, Nissen P (2007). "Crystal structure of the sodium–potassium pump". Nature. 450 (7172): 1043–1049. Bibcode:2007Natur.450.1043M. doi:10.1038/nature06419. PMID 18075585.
- Lee AG, East JM (2001). "What the structure of a calcium pump tells us about its mechanism". Biochemical Journal. 356 (Pt 3): 665–683. doi:10.1042/0264-6021:3560665. PMC . PMID 11389676.
- * FitzHugh R (1960). "Thresholds and plateaus in the Hodgkin-Huxley nerve equations". J. Gen. Physiol. 43 (5): 867–896. doi:10.1085/jgp.43.5.867. PMC . PMID 13823315.
* Kepler TB, Abbott LF, Marder E (1992). "Reduction of conductance-based neuron models". Biological Cybernetics. 66 (5): 381–387. doi:10.1007/BF00197717. PMID 1562643.
- Morris C, Lecar H (1981). "Voltage oscillations in the barnacle giant muscle fiber". Biophysical Journal. 35 (1): 193–213. Bibcode:1981BpJ....35..193M. doi:10.1016/S0006-3495(81)84782-0. PMC . PMID 7260316.
- FitzHugh R (1961). "Impulses and physiological states in theoretical models of nerve membrane". Biophysical Journal. 1 (6): 445–466. Bibcode:1961BpJ.....1..445F. doi:10.1016/S0006-3495(61)86902-6. PMC . PMID 19431309.
* Nagumo J, Arimoto S, Yoshizawa S (1962). "An active pulse transmission line simulating nerve axon". Proceedings of the IRE. 50 (10): 2061–2070. doi:10.1109/JRPROC.1962.288235.
- Bonhoeffer KF (1948). "Activation of Passive Iron as a Model for the Excitation of Nerve". J. Gen. Physiol. 32 (1): 69–91. doi:10.1085/jgp.32.1.69. PMC . PMID 18885679.
* Bonhoeffer KF (1953). "Modelle der Nervenerregung". Naturwissenschaften. 40 (11): 301–311. Bibcode:1953NW.....40..301B. doi:10.1007/BF00632438.
* van der Pol B (1926). "On relaxation-oscillations". Philosophical Magazine. 2: 977–992.
* van der Pol B, van der Mark J (1928). "The heartbeat considered as a relaxation oscillation, and an electrical model of the heart". Philosophical Magazine. 6: 763–775. doi:10.1080/14786441108564652.
* van der Pol B, van der Mark J (1929). "The heartbeat considered as a relaxation oscillation, and an electrical model of the heart". Arch. Neerl. Physiol. 14: 418–443.
- Evans JW (1972). "Nerve axon equations. I. Linear approximations". Indiana U. Math. Journal. 21 (9): 877–885. doi:10.1512/iumj.1972.21.21071.
* Evans JW, Feroe J (1977). "Local stability theory of the nerve impulse". Math. Biosci. 37: 23–50. doi:10.1016/0025-5564(77)90076-1.
- Keener JP (1983). "Analogue circuitry for the van der Pol and FitzHugh-Nagumo equations". IEEE Trans. on Systems, Man and Cybernetics. 13 (5): 1010–1014. doi:10.1109/TSMC.1983.6313098.
- Hooper SL (March 2000). "Central pattern generators". Curr. Biol. 10 (5): R176. CiteSeerX . doi:10.1016/S0960-9822(00)00367-5. PMID 10713861.
- "FitzHugh-Nagumo model". Retrieved 24 May 2014.
- "The Nobel Prize in Physiology or Medicine 1963" (Press release). The Royal Swedish Academy of Science. 1963. Retrieved 2010-02-21.
- "The Nobel Prize in Physiology or Medicine 1991" (Press release). The Royal Swedish Academy of Science. 1991. Retrieved 2010-02-21.
- "The Nobel Prize in Physiology or Medicine 1906" (Press release). The Royal Swedish Academy of Science. 1906. Retrieved 2010-02-21.
- Warlow, Charles. "The Recent Evolution of a Symbiotic Ion Channel in the Legume Family Altered Ion Conductance and Improved Functionality in Calcium Signaling". BMJ Publishing Group. Retrieved 23 March 2013.
- "The Nobel Prize in Chemistry 1997" (Press release). The Royal Swedish Academy of Science. 1997. Retrieved 2010-02-21.
- Aidley DJ, Stanfield PR (1996). Ion Channels: Molecules in Action. Cambridge: Cambridge University Press. ISBN 978-0-521-49882-1.
- Bear MF, Connors BW, Paradiso MA (2001). Neuroscience: Exploring the Brain. Baltimore: Lippincott. ISBN 0-7817-3944-6.
- Clay JR (May 2005). "Axonal excitability revisited". Prog Biophys Mol Biol. 88 (1): 59–90. doi:10.1016/j.pbiomolbio.2003.12.004. PMID 15561301.
- Deutsch S, Micheli-Tzanakou E (1987). Neuroelectric Systems. New York: New York University Press. ISBN 0-8147-1782-9.
- Hille B (2001). Ion Channels of Excitable Membranes (3rd ed.). Sunderland, MA: Sinauer Associates. ISBN 978-0-87893-321-1.
- Johnston D; Wu SM-S (1995). Foundations of Cellular Neurophysiology. Cambridge, MA: Bradford Book, The MIT Press. ISBN 0-262-10053-3.
- Kandel ER, Schwartz JH, Jessell TM (2000). Principles of Neural Science (4th ed.). New York: McGraw-Hill. ISBN 0-8385-7701-6.
- Miller C (1987). "How ion channel proteins work". In LK Kaczmarek; IB Levitan. Neuromodulation: The Biochemical Control of Neuronal Excitability. New York: Oxford University Press. pp. 39–63. ISBN 978-0-19-504097-5.
- Nelson DL, Cox MM (2008). Lehninger Principles of Biochemistry (5th ed.). New York: W. H. Freeman. ISBN 978-0-7167-7108-1.
- Ionic flow in action potentials at Blackwell Publishing
- Action potential propagation in myelinated and unmyelinated axons at Blackwell Publishing
- Generation of AP in cardiac cells and generation of AP in neuron cells
- Resting membrane potential from Life: The Science of Biology, by WK Purves, D Sadava, GH Orians, and HC Heller, 8th edition, New York: WH Freeman, ISBN 978-0-7167-7671-0.
- Ionic motion and the Goldman voltage for arbitrary ionic concentrations at The University of Arizona
- A cartoon illustrating the action potential
- Action potential propagation
- Production of the action potential: voltage and current clamping simulations[permanent dead link]
- Open-source software to simulate neuronal and cardiac action potentials at SourceForge.net
- Introduction to the Action Potential, Neuroscience Online (electronic neuroscience textbook by UT Houston Medical School)
Many music learners will speak of music theory with bad memories clouding their judgement: day after day reciting scales, notes, and most of all… the dreaded mnemonics! But simply put, music theory is important and actually really helpful for musicians!
The circle of fifths is a powerful and fundamental concept in music theory that underpins both playing and composition. The circle helps musicians understand the relationships between key signatures, chords, and scales. Whether you're a seasoned musician or just beginning your musical journey, delving into the circle of fifths can enhance your understanding of music and open up new avenues for creativity. In this blog post, we'll take a deep dive into the circle of fifths, exploring its structure and practical applications.
The Structure of the Circle of Fifths:
The circle of fifths is a circular diagram that arranges all 12 major and minor keys in a specific order. It starts with C major at the top, which is considered the 'neutral' key because its key signature contains no sharps or flats. Moving clockwise, each key is positioned a perfect fifth above the previous one, and a sharp is added with each step.
Rotating the circle counter-clockwise instead, a flat is added with each key, and each key sits a perfect fourth above the previous one. (The short sketch below generates the clockwise ordering programmatically.)
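Since each clockwise step is a perfect fifth — seven semitones — the whole circle can be generated mechanically. A minimal Python sketch, assuming simplified enharmonic spelling (F# standing in for Gb, and so on); the function name is just illustrative:

```python
# Walk the circle of fifths: each clockwise step is a perfect fifth,
# i.e. 7 semitones, so 12 steps visit every key once and return to C.
# (Enharmonic spelling is simplified: F# stands in for Gb, etc.)
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def circle_of_fifths(start="C"):
    i = NOTES.index(start)
    return [NOTES[(i + 7 * k) % 12] for k in range(12)]

print(circle_of_fifths())
# ['C', 'G', 'D', 'A', 'E', 'B', 'F#', 'C#', 'G#', 'D#', 'A#', 'F']
```

Twelve steps of seven semitones visit every pitch class exactly once because 7 and 12 share no common factor — that is why the circle closes neatly after one full lap.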
7 Ways to Practically Apply the Circle of Fifths to your Playing:
As mentioned above, the circle of fifths clearly displays the key signatures for each key. By examining the number of sharps or flats in a composition, musicians can quickly identify the key in which it is written. This is particularly helpful for performers, as it allows them to prepare for the specific scales and key-related elements they'll encounter in a piece.
The Circle of Fifths provides valuable insights into chord progressions within a given key. Musicians often refer to the primary chords in a key as the I, IV, and V chords. For example, in the key of C major, these chords would be C major (I), F major (IV), and G major (V). Understanding these relationships aids in composing and improvising chord progressions that sound harmonically pleasing.
Transposing a piece of music involves changing the key while maintaining its overall structure. The circle of fifths is immensely useful for this task. Musicians can simply move around the circle to find a new key, understanding the required sharps or flats and their positions. This is essential for accommodating different instruments or vocal ranges or adapting a piece to suit a particular performer's preference.
If you are jamming with another musician and they suddenly say "the verse is 1-4-5 in A", you will know the chords are A, D and E!
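That lookup is easy to mechanize. A tiny hypothetical helper in the same spirit as the earlier sketch (degrees 1, 4 and 5 sit 0, 5 and 7 semitones above the tonic; note spelling simplified as before):

```python
# Translate "1-4-5 in <key>" into chord roots.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
DEGREE_SEMITONES = {1: 0, 4: 5, 5: 7}  # tonic, perfect fourth, perfect fifth

def one_four_five(key):
    i = NOTES.index(key)
    return [NOTES[(i + DEGREE_SEMITONES[d]) % 12] for d in (1, 4, 5)]

print(one_four_five("A"))  # ['A', 'D', 'E']
```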
Modulation refers to a deliberate key change within a composition. Musicians use the circle of fifths to plan and execute smooth modulations. By selecting a key with a closely related key signature, the transition between keys feels natural to the listener. This technique is prevalent in classical music but is also used in various contemporary genres.
The circle of fifths is a helpful tool for constructing scales. By starting with the tonic note of a key (e.g., C for C major), musicians can follow the pattern of intervals around the circle to determine the notes that make up the major scale. This method applies to both major and minor scales and is a fundamental skill for improvisation and composition.
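As a concrete sketch of that interval-following idea, here is the major scale built from the standard whole/half-step pattern W-W-H-W-W-W-H (the pattern itself isn't spelled out in the text above, so take it as an assumption of this example):

```python
# Build a major scale from the interval pattern W-W-H-W-W-W-H
# (whole and half steps, i.e. 2 and 1 semitones).
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [2, 2, 1, 2, 2, 2]  # the final half step returns to the octave

def major_scale(tonic):
    i = NOTES.index(tonic)
    scale = [tonic]
    for step in MAJOR_STEPS:
        i = (i + step) % 12
        scale.append(NOTES[i])
    return scale

print(major_scale("G"))  # ['G', 'A', 'B', 'C', 'D', 'E', 'F#']
```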
Songwriting and Composition:
The circle of fifths is a valuable tool for songwriters and composers looking to create engaging melodies and harmonies. It offers a roadmap for exploring different keys and chord progressions, allowing for the development of memorable and emotionally resonant musical compositions.
Following on from composition, musicians and composers use the circle of fifths to explore harmonic relationships between keys. For instance, moving from one key to another that is a fifth away can create a sense of tension and resolution. This progression is commonly used to build suspense in music before resolving to a more stable key. Understanding these relationships adds depth and sophistication to compositions.
Noisy Clan’s Decoder: Circle of Fifths
Now we understand that this is not only a condensed version of a centuries-old musical concept, but also a LOT of information to really digest. So that's where a product design company comes in and designs a product (by musicians) for musicians. This is our Decoder: Circle of Fifths. Designed to focus on just one key at a time, eliminating all the extra ‘noise’ of a regular circle of fifths diagram.
Rotate the wheel either clockwise or counterclockwise to your desired key.
To get the most out of our funky fidget spinner, the back also showcases 12 different chord progressions for you to start composing your own music!
So, when you start to Decode(r) the circle of fifths, it is not as theoretically ‘boring’ as one might have thought. Whether you're a learner, a composer or a performer, a solid grasp of the circle of fifths can significantly enhance your musical abilities and creativity. Noisy Clan has made it their mission to help musicians of all ages and abilities comprehend the concept. We believe that music theory should be learnt and understood, but not feared or avoided. Now go forth! Take the fear out of music theory, and PLAY MORE!
Above, a Martian rampart crater, classified as a multi-layered ejecta crater; rampart craters have, thus far, been observed only on Mars.
A crater is a bowl-shaped depression, or hollowed-out area, produced by the impact of a meteorite, by volcanic activity, or by an explosion. Craters therefore come in two flavors: impact craters, caused by the collision of meteors (consisting of large fragments of asteroids) or comets (consisting of ice, dust particles and rocky fragments) with a planet or moon, and volcanic craters, formed by powerful explosions on volcanically active bodies. In the early debate about meteorite craters, roughly one hundred years ago, astronomers believed the many craters on the Moon were volcanic.
Throughout its existence, the Moon has been bombarded by comets and asteroid chunks, and those created the many impact craters we see today; with nothing to erode them, they remain in pretty much the same shape they were in after they were created. Crater formation remains a somewhat mysterious science, and one we won't get a good handle on until explorers return to the pristine craters of the lunar landscape.
How are impact craters formed? It's no surprise that the four 'rocky' planets of the Solar System – Mercury, Venus, Earth and Mars – feature their fair share of impact craters: all the inner bodies in our solar system have been heavily bombarded by meteoroids throughout their history. Impact craters are geologic structures formed when a large meteoroid, asteroid or comet smashes into a planet or a satellite. The high-speed impact of a large meteorite compresses, or forces downward, a wide area of rock. If the impact is sufficiently oblique, elongated craters may be formed, and the ejecta blanket may deviate considerably from a circular symmetry.
Some impact craters on Earth are so large that they cannot be recognized as craters except from orbit. One such impact formed what is now Lake Manicouagan; even with erosion, it is considered one of the largest and best-preserved craters on Earth, with an estimated diameter of 62 miles (100 kilometers). Other confirmed structures include the Ries impact structure in Germany and the Morokweng crater, whose asteroid impact date is estimated at 145 million years ago. The list of impact craters on Earth contains a selection of the 190 confirmed craters given in the Earth Impact Database, listed by size and age; the complete list is divided into separate articles by geographical region, and to keep the lists manageable, only the largest craters within a time period are included.
Impact craters, which occur almost everywhere on the Martian surface, are significant because the number of impact craters per unit area gives an indication of the relative ages of different parts of the surface. They also provide clues to the properties of the near-surface materials and record the effects of various processes, such as wind action, that modify the surface.
In a classroom activity, students can simulate meteorites crashing into the surface of the moon to determine the factors affecting the appearance of impact craters and ejecta.
The gradient is a fancy word for derivative, or the rate of change of a function. It’s a vector (a direction to move) that
- Points in the direction of greatest increase of a function (intuition on why)
- Is zero at a local maximum or local minimum (because there is no single direction of increase)
The term "gradient" is typically used for functions with several inputs and a single output (a scalar field). Yes, you can say a line has a gradient (its slope), but using "gradient" for single-variable functions is unnecessarily confusing. Keep it simple.
“Gradient” can refer to gradual changes of color, but we’ll stick to the math definition if that’s ok with you. You’ll see the meanings are related.
Table of Contents
Properties of the Gradient
Now that we know the gradient is the derivative of a multi-variable function, let’s derive some properties.
The regular, plain-old derivative gives us the rate of change of a single variable, usually x. For example, dF/dx tells us how much the function F changes for a change in x. But if a function takes multiple variables, such as x and y, it will have multiple derivatives: the value of the function will change when we “wiggle” x (dF/dx) and when we wiggle y (dF/dy).
We can represent these multiple rates of change in a vector, with one component for each derivative. Thus, a function that takes 3 variables will have a gradient with 3 components:
- F(x) has one variable and a single derivative: dF/dx
- F(x,y,z) has three variables and three derivatives: (dF/dx, dF/dy, dF/dz)
The gradient of a multi-variable function has a component for each direction.
And just like the regular derivative, the gradient points in the direction of greatest increase (the following article explains why). However, now that we have multiple directions to consider (x, y and z), the direction of greatest increase is no longer simply “forward” or “backward” along the x-axis, like it is with functions of a single variable.
If we have two variables, then our 2-component gradient can specify any direction on a plane. Likewise, with 3 variables, the gradient can specify any direction in 3D space to move to increase our function.
A Twisted Example
I’m a big fan of examples to help solidify an explanation. Suppose we have a magical oven, with coordinates written on it and a special display screen:
We can type any 3 coordinates (like “3,5,2″) and the display shows us the gradient of the temperature at that point.
The oven also comes with a convenient clock. Unfortunately, the clock comes at a price — the temperature inside the oven varies drastically from location to location. But this was well worth it: we really wanted that clock.
With me so far? We type in any coordinate, and the oven spits out the gradient at that location.
Be careful not to confuse the coordinates and the gradient. The coordinates are the current location, measured on the x-y-z axis. The gradient is a direction to move from our current location, such as move up, down, left or right.
Now suppose we are in need of psychiatric help and put the Pillsbury Dough Boy inside the oven because we think he would taste good. He’s made of cookie dough, right? We place him in a random location inside the oven, and our goal is to cook him as fast as possible. The gradient can help!
The gradient at any location points in the direction of greatest increase of a function. In this case, our function measures temperature. So, the gradient tells us which direction to move the doughboy to get him to a location with a higher temperature, to cook him even faster. Remember that the gradient does not give us the coordinates of where to go; it gives us the direction to move to increase our temperature.
Thus, we would start at a random point like (3,5,2) and check the gradient. In this case, the gradient there is (3,4,5). Now, we wouldn’t actually move an entire 3 units to the right, 4 units back, and 5 units up. The gradient is just a direction, so we’d follow this trajectory for a tiny bit, and then check the gradient again.
We get to a new point, pretty close to our original, which has its own gradient. This new gradient is the new best direction to follow. We'd keep repeating this process: move a bit in the gradient direction, check the gradient, and move a bit in the new gradient direction. Every time we nudge along and follow the gradient, we get to a warmer and warmer location.
Eventually, we’d get to the hottest part of the oven and that’s where we’d stay, about to enjoy our fresh cookies.
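The nudge-and-recheck procedure just described is exactly what numerical optimizers call gradient ascent. A minimal Python sketch, using a made-up smooth temperature field (the oven's actual temperature function was never given) and a finite-difference gradient:

```python
# Gradient ascent on a toy "temperature" field T(x, y, z).
# The field and step size are made-up choices for illustration.
def T(x, y, z):
    # Hottest at (1, 2, 3); temperature falls off quadratically.
    return 100.0 - (x - 1) ** 2 - (y - 2) ** 2 - (z - 3) ** 2

def gradient(f, p, h=1e-5):
    # Central finite differences: one partial derivative per coordinate.
    g = []
    for i in range(len(p)):
        lo, hi = list(p), list(p)
        lo[i] -= h
        hi[i] += h
        g.append((f(*hi) - f(*lo)) / (2 * h))
    return g

p = [3.0, 5.0, 2.0]   # start at a random spot in the oven
step = 0.1            # follow the gradient "for a tiny bit"
for _ in range(200):
    g = gradient(T, p)
    p = [pi + step * gi for pi, gi in zip(p, g)]

print([round(c, 3) for c in p])  # ends up near the hot spot (1, 2, 3)
```

Flip the sign of the update and the same loop walks downhill instead — which is gradient descent, the workhorse of machine-learning optimization.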
But before you eat those cookies, let’s make some observations about the gradient. That’s more fun, right?
First, when we reach the hottest point in the oven, what is the gradient there?
Zero. Nada. Zilch. Why? Well, once you are at the maximum location, there is no direction of greatest increase. Any direction you follow will lead to a decrease in temperature. It’s like being at the top of a mountain: any direction you move is downhill. A zero gradient tells you to stay put – you are at the max of the function, and can’t do better.
But what if there are two nearby maximums, like two mountains next to each other? You could be at the top of one mountain, but have a bigger peak next to you. In order to get to the highest point, you have to go downhill first.
Ah, now we are venturing into the not-so-pretty underbelly of the gradient. Finding the maximum in regular (single variable) functions means we find all the places where the derivative is zero: there is no direction of greatest increase. If you recall, the regular derivative will point to local minimums and maximums, and the absolute max/min must be tested from these candidate locations.
The same principle applies to the gradient, a generalization of the derivative. You must find multiple locations where the gradient is zero — you’ll have to test these points to see which one is the global maximum. Again, the top of each hill has a zero gradient — you need to compare the height at each to see which one is higher. Now that we have cleared that up, go enjoy your cookie.
We know the definition of the gradient: a derivative for each variable of a function. The gradient symbol is usually an upside-down delta, ∇, called "del" (this makes a bit of sense – delta indicates change in one variable, and the gradient is the change for all variables). Taking our group of 3 derivatives above, we write:
∇F = (∂F/∂x, ∂F/∂y, ∂F/∂z)
Notice how the x-component of the gradient is the partial derivative with respect to x (similar for y and z). For a one-variable function, there is no y-component at all, so the gradient reduces to the derivative.
Also, notice how the gradient can itself be a function!
If we want to find the direction to move to increase our function the fastest, we plug our current coordinates (such as (3, 4, 5)) into the gradient and read off the resulting vector.
So, this new vector (1, 8, 75) would be the direction we’d move in to increase the value of our function. In this case, our x-component doesn’t add much to the value of the function: the partial derivative is always 1.
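The function itself is not reproduced here, but the quoted numbers, a constant x-derivative of 1 and a gradient of (1, 8, 75) at (3, 4, 5), are consistent with an assumed function like F(x, y, z) = x + y² + z³. A quick check of that assumption:

```python
# Check that an ASSUMED F(x, y, z) = x + y**2 + z**3 reproduces the gradient
# (1, 8, 75) quoted at the point (3, 4, 5). The function is an assumption
# consistent with the text, not taken from it.

def F(x, y, z):
    return x + y**2 + z**3

def grad_F(x, y, z):
    # partial derivatives by hand: dF/dx = 1, dF/dy = 2y, dF/dz = 3z**2
    return (1, 2 * y, 3 * z**2)

print(grad_F(3, 4, 5))   # (1, 8, 75)
```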
Obvious applications of the gradient are finding the max/min of multivariable functions. Another less obvious but related application is finding the maximum of a constrained function: a function whose x and y values have to lie in a certain domain, i.e. find the maximum of all points constrained to lie along a circle. Solving this calls for my boy Lagrange, but all in due time, all in due time: enjoy the gradient for now.
The key insight is to recognize the gradient as the generalization of the derivative. The gradient points in the direction of greatest increase; keep following the gradient, and you will reach the local maximum.
Why is the gradient perpendicular to lines of equal potential?
Lines of equal potential (“equipotential”) are the points with the same energy (or value for F(x,y,z)). In the simplest case, a circle represents all items the same distance from the center.
The gradient represents the direction of greatest change. If it had any component along the line of equipotential, then that energy would be wasted (as it’s moving closer to a point at the same energy). When the gradient is perpendicular to the equipotential points, it is moving as far from them as possible (this article explains why the gradient is the direction of greatest increase — it’s the direction that maximizes the varying tradeoffs inside a circle).
Other Posts In This Series
- Vector Calculus: Understanding the Dot Product
- Vector Calculus: Understanding the Cross Product
- Vector Calculus: Understanding Flux
- Vector Calculus: Understanding Divergence
- Vector Calculus: Understanding Circulation and Curl
- Vector Calculus: Understanding the Gradient
- Understanding Pythagorean Distance and the Gradient
|
Diffraction is the interference or bending of waves around the corners of an obstacle or through an aperture into the region of geometrical shadow of the obstacle/aperture. The diffracting object or aperture effectively becomes a secondary source of the propagating wave. Italian scientist Francesco Maria Grimaldi coined the word diffraction and was the first to record accurate observations of the phenomenon in 1660.
In classical physics, the diffraction phenomenon is described by the Huygens–Fresnel principle that treats each point in a propagating wavefront as a collection of individual spherical wavelets. The characteristic bending pattern is most pronounced when a wave from a coherent source (such as a laser) encounters a slit/aperture that is comparable in size to its wavelength, as shown in the inserted image. This is due to the addition, or interference, of different points on the wavefront (or, equivalently, each wavelet) that travel by paths of different lengths to the registering surface. If there are multiple, closely spaced openings (e.g., a diffraction grating), a complex pattern of varying intensity can result.
These effects also occur when a light wave travels through a medium with a varying refractive index, or when a sound wave travels through a medium with varying acoustic impedance – all waves diffract, including gravitational waves, water waves, and other electromagnetic waves such as X-rays and radio waves. Furthermore, quantum mechanics also demonstrates that matter possesses wave-like properties and, therefore, undergoes diffraction (which is measurable at subatomic to molecular levels).
The amount of diffraction depends on the size of the gap. Diffraction is greatest when the size of the gap is similar to the wavelength of the wave. In this case, when the waves pass through the gap they become semi-circular.
The effects of diffraction of light were first carefully observed and characterized by Francesco Maria Grimaldi, who also coined the term diffraction, from the Latin diffringere, 'to break into pieces', referring to light breaking up into different directions. The results of Grimaldi's observations were published posthumously in 1665. Isaac Newton studied these effects and attributed them to inflexion of light rays. James Gregory (1638–1675) observed the diffraction patterns caused by a bird feather, which was effectively the first diffraction grating to be discovered. Thomas Young performed a celebrated experiment in 1803 demonstrating interference from two closely spaced slits. Explaining his results by interference of the waves emanating from the two different slits, he deduced that light must propagate as waves. Augustin-Jean Fresnel did more definitive studies and calculations of diffraction, made public in 1816 and 1818, and thereby gave great support to the wave theory of light that had been advanced by Christiaan Huygens and reinvigorated by Young, against Newton's particle theory.
In classical physics diffraction arises because of the way in which waves propagate; this is described by the Huygens–Fresnel principle and the principle of superposition of waves. The propagation of a wave can be visualized by considering every particle of the transmitted medium on a wavefront as a point source for a secondary spherical wave. The wave displacement at any subsequent point is the sum of these secondary waves. When waves are added together, their sum is determined by the relative phases as well as the amplitudes of the individual waves so that the summed amplitude of the waves can have any value between zero and the sum of the individual amplitudes. Hence, diffraction patterns usually have a series of maxima and minima.
In the modern quantum mechanical understanding of light propagation through a slit (or slits) every photon is described by its wavefunction that determines the probability distribution for the photon: the light and dark bands are the areas where the photons are more or less likely to be detected. The wavefunction is determined by the physical surroundings such as slit geometry, screen distance and initial conditions when the photon is created. The wave nature of individual photons (as opposed to wave properties only arising from the interactions between multitudes of photons) was implied by a low-intensity double-slit experiment first performed by G. I. Taylor in 1909. The quantum approach has some striking similarities to the Huygens-Fresnel principle; based on that principle, as light travels through slits and boundaries, secondary point light sources are created near or along these obstacles, and the resulting diffraction pattern is going to be the intensity profile based on the collective interference of all these light sources that have different optical paths. In the quantum formalism, that is similar to considering the limited regions around the slits and boundaries from which photons are more likely to originate, and calculating the probability distribution (that is proportional to the resulting intensity of classical formalism).
There are various analytical models which allow the diffracted field to be calculated, including the Kirchhoff-Fresnel diffraction equation (derived from the wave equation), the Fraunhofer diffraction approximation of the Kirchhoff equation (applicable to the far field), the Fresnel diffraction approximation (applicable to the near field) and the Feynman path integral formulation. Most configurations cannot be solved analytically, but can yield numerical solutions through finite element and boundary element methods.
It is possible to obtain a qualitative understanding of many diffraction phenomena by considering how the relative phases of the individual secondary wave sources vary, and, in particular, the conditions in which the phase difference equals half a cycle in which case waves will cancel one another out.
The simplest descriptions of diffraction are those in which the situation can be reduced to a two-dimensional problem. For water waves, this is already the case; water waves propagate only on the surface of the water. For light, we can often neglect one direction if the diffracting object extends in that direction over a distance far greater than the wavelength. In the case of light shining through small circular holes we will have to take into account the full three-dimensional nature of the problem.
Computer-generated intensity pattern formed on a screen by diffraction from a square aperture.
Generation of an interference pattern from two-slit diffraction.
Computational model of an interference pattern from two-slit diffraction.
Optical diffraction pattern from a laser (analogous to X-ray crystallography)
The effects of diffraction are often seen in everyday life. The most striking examples of diffraction are those that involve light; for example, the closely spaced tracks on a CD or DVD act as a diffraction grating to form the familiar rainbow pattern seen when looking at a disc.
This principle can be extended to engineer a grating with a structure such that it will produce any diffraction pattern desired; the hologram on a credit card is an example.
Diffraction in the atmosphere by small particles can cause a bright ring to be visible around a bright light source like the sun or the moon.
A shadow of a solid object, using light from a compact source, shows small fringes near its edges.
The speckle pattern which is observed when laser light falls on an optically rough surface is also a diffraction phenomenon. When deli meat appears to be iridescent, that is diffraction off the meat fibers. All these effects are a consequence of the fact that light propagates as a wave.
Diffraction can occur with any kind of wave. Ocean waves diffract around jetties and other obstacles.
Sound waves can diffract around objects, which is why one can still hear someone calling even when hiding behind a tree.
Diffraction can also be a concern in some technical applications; it sets a fundamental limit to the resolution of a camera, telescope, or microscope.
Other examples of diffraction are considered below.
A long slit of infinitesimal width which is illuminated by light diffracts the light into a series of circular waves and the wavefront which emerges from the slit is a cylindrical wave of uniform intensity, in accordance with the Huygens–Fresnel principle.
An illuminated slit that is wider than a wavelength produces interference effects in the space downstream of the slit. Assuming that the slit behaves as though it has a large number of point sources spaced evenly across the width of the slit, interference effects can be calculated. The analysis of this system is simplified if we consider light of a single wavelength. If the incident light is coherent, these sources all have the same phase. Light incident at a given point in the space downstream of the slit is made up of contributions from each of these point sources, and if the relative phases of these contributions vary by 2π or more, we may expect to find minima and maxima in the diffracted light. Such phase differences are caused by differences in the path lengths over which contributing rays reach the point from the slit.
We can find the angle at which a first minimum is obtained in the diffracted light by the following reasoning. The light from a source located at the top edge of the slit interferes destructively with a source located at the middle of the slit when the path difference between them is equal to λ/2. Similarly, the source just below the top of the slit will interfere destructively with the source located just below the middle of the slit at the same angle. We can continue this reasoning along the entire height of the slit to conclude that the condition for destructive interference for the entire slit is the same as the condition for destructive interference between two narrow slits spaced half the width of the slit apart. The path difference is approximately (d sin θ)/2, so that the minimum intensity occurs at an angle θ_min given by d sin θ_min = λ, where d is the width of the slit.
A similar argument can be used to show that if we imagine the slit to be divided into four, six, eight parts, etc., minima are obtained at angles θ_n given by d sin θ_n = nλ, where n is an integer other than zero.
From the intensity profile above, if d ≪ λ, the intensity will have little dependency on θ, hence the wavefront emerging from the slit would resemble a cylindrical wave with azimuthal symmetry; if d ≫ λ, only θ ≈ 0 would have appreciable intensity, hence the wavefront emerging from the slit would resemble that of geometrical optics.
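As a quick numerical illustration of the minima condition d sin θ_n = nλ above, here is a small Python sketch; the slit width and wavelength are arbitrary example values, not numbers from the article.

```python
# Angles of the first few single-slit minima from d*sin(theta_n) = n*lambda.
# Slit width and wavelength below are arbitrary example values.
import math

wavelength = 500e-9   # 500 nm (green light)
slit_width = 5e-6     # 5 micrometre slit

for n in range(1, 4):
    s = n * wavelength / slit_width
    if s <= 1:
        theta = math.degrees(math.asin(s))
        print(f"minimum n={n}: {theta:.2f} degrees")
```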
When the incident angle of the light onto the slit is non-zero (which causes a change in the path length), the intensity profile in the Fraunhofer regime (i.e. far field) keeps the same form as above but with sin θ replaced by sin θ ± sin θ_i.
The choice of plus/minus sign depends on the definition of the incident angle θ_i.
A diffraction grating is an optical component with a regular pattern. The form of the light diffracted by a grating depends on the structure of the elements and the number of elements present, but all gratings have intensity maxima at angles θ_m which, for light at normal incidence, are given by the grating equation d sin θ_m = mλ, where d is the separation of grating elements and m is an integer that can be positive or negative.
The light diffracted by a grating is found by summing the light diffracted from each of the elements, and is essentially a convolution of diffraction and interference patterns.
The figure shows the light diffracted by 2-element and 5-element gratings where the grating spacings are the same; it can be seen that the maxima are in the same position, but the detailed structures of the intensities are different.
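A small numerical sketch of the grating equation at normal incidence, d sin θ_m = mλ, follows; the grating pitch and wavelength are arbitrary example values.

```python
# Diffraction-grating maxima at normal incidence: d*sin(theta_m) = m*lambda.
# Grating pitch and wavelength are arbitrary example values.
import math

wavelength = 632.8e-9   # He-Ne laser line, metres
d = 1e-3 / 600          # 600 lines per mm -> element spacing in metres

m = 1
while m * wavelength / d <= 1:
    theta = math.degrees(math.asin(m * wavelength / d))
    print(f"order m={m}: {theta:.2f} degrees")
    m += 1
```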
(See del in cylindrical and spherical coordinates.) By direct substitution, the solution to this equation can be readily shown to be the scalar Green's function, which in the spherical coordinate system (and using the physics time convention e^(−iωt)) is G(r) = e^(ikr) / (4πr).
This solution assumes that the delta function source is located at the origin. If the source is located at an arbitrary source point, denoted by the vector r′, and the field point is located at the point r, then we may represent the scalar Green's function (for arbitrary source location) as G(r | r′) = e^(ik|r − r′|) / (4π|r − r′|).
Therefore, if an electric field is incident on the aperture, the field produced by this aperture distribution is given by the surface integral
where the source point in the aperture is given by the vector
In the far field, wherein the parallel rays approximation can be employed, the Green's function,
The expression for the far-zone (Fraunhofer region) field becomes
In the far-field / Fraunhofer region, this becomes the spatial Fourier transform of the aperture distribution. Huygens' principle when applied to an aperture simply says that the far-field diffraction pattern is the spatial Fourier transform of the aperture shape, and this is a direct by-product of using the parallel-rays approximation, which is identical to doing a plane wave decomposition of the aperture plane fields (see Fourier optics).
Propagation of a laser beam
The way in which the beam profile of a laser beam changes as it propagates is determined by diffraction. When the entire emitted beam has a planar, spatially coherent wave front, it approximates a Gaussian beam profile and has the lowest divergence for a given diameter. The smaller the output beam, the quicker it diverges. It is possible to reduce the divergence of a laser beam by first expanding it with one convex lens, and then collimating it with a second convex lens whose focal point is coincident with that of the first lens. The resulting beam has a larger diameter, and hence a lower divergence. Divergence of a laser beam may be reduced below the diffraction-limited divergence of a Gaussian beam, or even reversed to convergence, if the refractive index of the propagation media increases with the light intensity. This may result in a self-focusing effect.
When the wave front of the emitted beam has perturbations, only the transverse coherence length (where the wave front perturbation is less than 1/4 of the wavelength) should be considered as a Gaussian beam diameter when determining the divergence of the laser beam. If the transverse coherence length in the vertical direction is higher than in horizontal, the laser beam divergence will be lower in the vertical direction than in the horizontal.
The ability of an imaging system to resolve detail is ultimately limited by diffraction. This is because a plane wave incident on a circular lens or mirror is diffracted as described above. The light is not focused to a point but forms an Airy disk having a central spot in the focal plane whose radius (as measured to the first null) is Δx = 1.22 λ N, where λ is the wavelength of the light and N is the f-number (focal length divided by aperture diameter) of the imaging optics.
Two point sources will each produce an Airy pattern – see the photo of a binary star. As the point sources move closer together, the patterns will start to overlap, and ultimately they will merge to form a single pattern, in which case the two point sources cannot be resolved in the image. The Rayleigh criterion specifies that two point sources are considered "resolved" if the separation of the two images is at least the radius of the Airy disk, i.e. if the first minimum of one coincides with the maximum of the other.
Thus, the larger the aperture of the lens compared to the wavelength, the finer the resolution of an imaging system. This is one reason astronomical telescopes require large objectives, and why microscope objectives require a large numerical aperture (large aperture diameter compared to working distance) in order to obtain the highest possible resolution.
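A short sketch of the diffraction-limited angular resolution implied by the Airy-disk radius, using the Rayleigh criterion θ ≈ 1.22 λ/D; the aperture and wavelength are example values.

```python
# Diffraction-limited angular resolution (Rayleigh criterion):
# theta ~ 1.22 * lambda / D. Aperture and wavelength are example values.
import math

wavelength = 550e-9   # visible light, metres
aperture = 0.1        # 10 cm telescope objective, metres

theta = 1.22 * wavelength / aperture            # radians (small angle)
arcsec = math.degrees(theta) * 3600
print(f"resolution ~ {arcsec:.2f} arcseconds")  # ~1.4 arcseconds
```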
The speckle pattern seen when using a laser pointer is another diffraction phenomenon. It is a result of the superposition of many waves with different phases, which are produced when a laser beam illuminates a rough surface. They add together to give a resultant wave whose amplitude, and therefore intensity, varies randomly.
Babinet's principle is a useful theorem stating that the diffraction pattern from an opaque body is identical to that from a hole of the same size and shape, but with differing intensities. This means that the interference conditions of a single obstruction would be the same as that of a single slit.
The knife-edge effect or knife-edge diffraction is a truncation of a portion of the incident radiation that strikes a sharp well-defined obstacle, such as a mountain range or the wall of a building. The knife-edge effect is explained by the Huygens–Fresnel principle, which states that a well-defined obstruction to an electromagnetic wave acts as a secondary source, and creates a new wavefront. This new wavefront propagates into the geometric shadow area of the obstacle.
Knife-edge diffraction is an outgrowth of the "half-plane problem", originally solved by Arnold Sommerfeld using a plane wave spectrum formulation. A generalization of the half-plane problem is the "wedge problem", solvable as a boundary value problem in cylindrical coordinates. The solution in cylindrical coordinates was then extended to the optical regime by Joseph B. Keller, who introduced the notion of diffraction coefficients through his geometrical theory of diffraction (GTD). Pathak and Kouyoumjian extended the (singular) Keller coefficients via the uniform theory of diffraction (UTD).
Diffraction on a sharp metallic edge
Diffraction on a soft aperture, with a gradient of conductivity over the image width
Several qualitative observations can be made of diffraction in general:
- The angular spacing of the features in the diffraction pattern is inversely proportional to the dimensions of the object causing the diffraction. In other words: The smaller the diffracting object, the 'wider' the resulting diffraction pattern, and vice versa. (More precisely, this is true of the sines of the angles.)
- The diffraction angles are invariant under scaling; that is, they depend only on the ratio of the wavelength to the size of the diffracting object.
- When the diffracting object has a periodic structure, for example in a diffraction grating, the features generally become sharper. The third figure, for example, shows a comparison of a double-slit pattern with a pattern formed by five slits, both sets of slits having the same spacing, between the center of one slit and the next.
Matter wave diffraction
According to quantum theory every particle exhibits wave properties and can therefore diffract. Diffraction of electrons and neutrons is one of the powerful arguments in favor of quantum mechanics. The wavelength associated with a particle is the de Broglie wavelength λ = h/p, where h is the Planck constant and p is the momentum of the particle.
Diffraction of matter waves has been observed for small particles, like electrons, neutrons, atoms, and even large molecules. The short wavelength of these matter waves makes them ideally suited to study the atomic crystal structure of solids, small molecules and proteins.
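A quick sketch of the de Broglie wavelength λ = h/p for a non-relativistic electron; the 100 eV kinetic energy is an arbitrary example value.

```python
# de Broglie wavelength lambda = h / p for a non-relativistic electron.
# The 100 eV kinetic energy is an arbitrary example value.
import math

h = 6.626e-34          # Planck constant, J*s
m_e = 9.109e-31        # electron mass, kg
E = 100 * 1.602e-19    # 100 eV in joules

p = math.sqrt(2 * m_e * E)          # momentum from kinetic energy
wavelength = h / p
print(f"{wavelength * 1e9:.3f} nm")  # ~0.12 nm, comparable to atomic spacings
```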
Diffraction from a large three-dimensional periodic structure such as many thousands of atoms in a crystal is called Bragg diffraction. It is similar to what occurs when waves are scattered from a diffraction grating. Bragg diffraction is a consequence of interference between waves reflecting from many different crystal planes. The condition of constructive interference is given by Bragg's law: 2d sin θ = mλ, where d is the spacing between crystal planes, θ is the angle of the incident wave measured from the planes, m is an integer, and λ is the wavelength of the wave.
Bragg diffraction may be carried out using either electromagnetic radiation of very short wavelength like X-rays, or matter waves like neutrons (and electrons) whose wavelength is on the order of (or much smaller than) the atomic spacing. The pattern produced gives information about the separations of crystallographic planes d, allowing one to deduce the crystal structure.
For completeness, Bragg diffraction is a limit for a large number of atoms with X-rays or neutrons, and is rarely valid for electron diffraction or with solid particles in the size range of less than 50 nanometers.
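A minimal sketch of Bragg's law, 2d sin θ = mλ, listing the angles at which constructive interference occurs; the plane spacing and X-ray wavelength are example values.

```python
# Bragg's law: 2*d*sin(theta) = m*lambda. Plane spacing and X-ray
# wavelength below are arbitrary example values.
import math

d = 2.82e-10           # crystal plane spacing, metres (example value)
wavelength = 1.54e-10  # Cu K-alpha X-rays, metres

m = 1
while m * wavelength / (2 * d) <= 1:
    theta = math.degrees(math.asin(m * wavelength / (2 * d)))
    print(f"order m={m}: theta = {theta:.1f} degrees")
    m += 1
```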
The description of diffraction relies on the interference of waves emanating from the same source taking different paths to the same point on a screen. In this description, the difference in phase between waves that took different paths is only dependent on the effective path length. This does not take into account the fact that waves that arrive at the screen at the same time were emitted by the source at different times. The initial phase with which the source emits waves can change over time in an unpredictable way. This means that waves emitted by the source at times that are too far apart can no longer form a constant interference pattern since the relation between their phases is no longer time independent.
The length over which the phase in a beam of light is correlated is called the coherence length. In order for interference to occur, the path length difference must be smaller than the coherence length. This is sometimes referred to as spectral coherence, as it is related to the presence of different frequency components in the wave. In the case of light emitted by an atomic transition, the coherence length is related to the lifetime of the excited state from which the atom made its transition.
If waves are emitted from an extended source, this can lead to incoherence in the transversal direction. When looking at a cross section of a beam of light, the length over which the phase is correlated is called the transverse coherence length. In the case of Young's double-slit experiment, this would mean that if the transverse coherence length is smaller than the spacing between the two slits, the resulting pattern on a screen would look like two single-slit diffraction patterns.
In the case of particles like electrons, neutrons, and atoms, the coherence length is related to the spatial extent of the wave function that describes the particle.
Diffraction before destruction
A new way to image single biological particles has emerged since the 2010s, utilising the bright X-rays generated by X-ray free-electron lasers. These femtosecond-duration pulses allow, in principle, for the imaging of single biological macromolecules: because the pulses are so short, radiation damage can be outrun, and diffraction patterns of single biological macromolecules can be obtained.
- Angle-sensitive pixel
- Atmospheric diffraction
- Brocken spectre
- Cloud iridescence
- Coherent diffraction imaging
- Diffraction from slits
- Diffraction spike
- Diffraction vs. interference
- Diffractive solar sail
- Dynamical theory of diffraction
- Electron diffraction
- Fraunhofer diffraction
- Fresnel imager
- Fresnel number
- Fresnel zone
- Point spread function
- Powder diffraction
- Schaefer–Bergmann diffraction
- Thinned-array curse
- X-ray scattering techniques
- Francesco Maria Grimaldi, Physico mathesis de lumine, coloribus, et iride, aliisque annexis libri duo (Bologna ("Bonomia"), Italy: Vittorio Bonati, 1665), page 2 Archived 2016-12-01 at the Wayback Machine:
Original : Nobis alius quartus modus illuxit, quem nunc proponimus, vocamusque; diffractionem, quia advertimus lumen aliquando diffringi, hoc est partes eius multiplici dissectione separatas per idem tamen medium in diversa ulterius procedere, eo modo, quem mox declarabimus.
Translation : It has illuminated for us another, fourth way, which we now make known and call "diffraction" [i.e., shattering], because we sometimes observe light break up; that is, that parts of the compound [i.e., the beam of light], separated by division, advance farther through the medium but in different [directions], as we will soon show.
- Cajori, Florian "A History of Physics in its Elementary Branches, including the evolution of physical laboratories." Archived 2016-12-01 at the Wayback Machine MacMillan Company, New York 1899
- Wireless Communications: Principles and Practice, Prentice Hall communications engineering and emerging technologies series, T. S. Rappaport, Prentice Hall, 2002 pg 126
- Suryanarayana, C.; Norton, M. Grant (29 June 2013). X-Ray Diffraction: A Practical Approach. Springer Science & Business Media. p. 14. ISBN 978-1-4899-0148-4. Retrieved 7 January 2023.
- Kokkotas, Kostas D. (2003). "Gravitational Wave Physics". Encyclopedia of Physical Science and Technology: 67–85. doi:10.1016/B0-12-227410-5/00300-8. ISBN 9780122274107.
- Juffmann, Thomas; Milic, Adriana; Müllneritsch, Michael; Asenbaum, Peter; Tsukernik, Alexander; Tüxen, Jens; Mayor, Marcel; Cheshnovsky, Ori; Arndt, Markus (25 March 2012). "Real-time single-molecule imaging of quantum interference". Nature Nanotechnology. 7 (5): 297–300. arXiv:1402.1867. Bibcode:2012NatNa...7..297J. doi:10.1038/nnano.2012.34. ISSN 1748-3395. PMID 22447163. S2CID 5918772.
- Francesco Maria Grimaldi, Physico-mathesis de lumine, coloribus, et iride, aliisque adnexis … [The physical mathematics of light, color, and the rainbow, and other things appended …] (Bologna ("Bonomia"), (Italy): Vittorio Bonati, 1665), pp. 1–11 Archived 2016-12-01 at the Wayback Machine: "Propositio I. Lumen propagatur seu diffunditur non solum directe, refracte, ac reflexe, sed etiam alio quodam quarto modo, diffracte." (Proposition 1. Light propagates or spreads not only in a straight line, by refraction, and by reflection, but also by a somewhat different fourth way: by diffraction.) On p. 187, Grimaldi also discusses the interference of light from two sources: "Propositio XXII. Lumen aliquando per sui communicationem reddit obscuriorem superficiem corporis aliunde, ac prius illustratam." (Proposition 22. Sometimes light, as a result of its transmission, renders dark a body's surface, [which had been] previously illuminated by another [source].)
- Jean Louis Aubert (1760). Memoires pour l'histoire des sciences et des beaux arts. Paris: Impr. de S. A. S.; Chez E. Ganeau. pp. 149.
- Sir David Brewster (1831). A Treatise on Optics. London: Longman, Rees, Orme, Brown & Green and John Taylor. pp. 95.
- Letter from James Gregory to John Collins, dated 13 May 1673. Reprinted in: Correspondence of Scientific Men of the Seventeenth Century …, ed. Stephen Jordan Rigaud (Oxford, England: Oxford University Press, 1841), vol. 2, pp. 251–255, especially p. 254 Archived 2016-12-01 at the Wayback Machine.
- Thomas Young (1 January 1804). "The Bakerian Lecture: Experiments and calculations relative to physical optics". Philosophical Transactions of the Royal Society of London. 94: 1–16. Bibcode:1804RSPT...94....1Y. doi:10.1098/rstl.1804.0001. S2CID 110408369.. (Note: This lecture was presented before the Royal Society on 24 November 1803.)
- Fresnel, Augustin-Jean (1816), "Mémoire sur la diffraction de la lumière" ("Memoir on the diffraction of light"), Annales de Chimie et de Physique, vol. 1, pp. 239–81 (March 1816); reprinted as "Deuxième Mémoire…" ("Second Memoir…") in Oeuvres complètes d'Augustin Fresnel, vol. 1 (Paris: Imprimerie Impériale, 1866), pp. 89–122. (Revision of the "First Memoir" submitted on 15 October 1815.)
- Fresnel, Augustin-Jean (1818), "Mémoire sur la diffraction de la lumière" ("Memoir on the diffraction of light"), deposited 29 July 1818, "crowned" 15 March 1819, published in Mémoires de l'Académie Royale des Sciences de l'Institut de France, vol. V (for 1821 & 1822, printed 1826), pp. 339–475; reprinted in Oeuvres complètes d'Augustin Fresnel, vol. 1 (Paris: Imprimerie Impériale, 1866), pp. 247–364; partly translated as "Fresnel's prize memoir on the diffraction of light", in H. Crew (ed.), The Wave Theory of Light: Memoirs by Huygens, Young and Fresnel, American Book Company, 1900, pp. 81–144. (First published, as extracts only, in Annales de Chimie et de Physique, vol. 11 (1819), pp. 246–96, 337–78.)
- Christiaan Huygens, Traité de la lumiere … Archived 2016-06-16 at the Wayback Machine (Leiden, Netherlands: Pieter van der Aa, 1690), Chapter 1. From p. 15 Archived 2016-12-01 at the Wayback Machine: "J'ay donc monstré de quelle façon l'on peut concevoir que la lumiere s'etend successivement par des ondes spheriques, … " (I have thus shown in what manner one can imagine that light propagates successively by spherical waves, … ) (Note: Huygens published his Traité in 1690; however, in the preface to his book, Huygens states that in 1678 he first communicated his book to the French Royal Academy of Sciences.)
- Baker, B.B. & Copson, E.T. (1939), The Mathematical Theory of Huygens' Principle, Oxford, pp. 36–40.
- Dietrich Zawischa. "Optical effects on spider webs". Retrieved 21 September 2007.
- Arumugam, Nadia (9 September 2013). "Food Explainer: Why Is Some Deli Meat Iridescent?". Slate. The Slate Group. Archived from the original on 10 September 2013. Retrieved 9 September 2013.
- Andrew Norton (2000). Dynamic fields and waves of physics. CRC Press. p. 102. ISBN 978-0-7503-0719-2.
- Chiao, R. Y.; Garmire, E.; Townes, C. H. (1964). "Self-Trapping of Optical Beams". Physical Review Letters. 13 (15): 479–482. Bibcode:1964PhRvL..13..479C. doi:10.1103/PhysRevLett.13.479.
- John M. Cowley (1975) Diffraction physics (North-Holland, Amsterdam) ISBN 0-444-10791-6
- Halliday, David; Resnick, Robert; Walker, Jerl (2005), Fundamental of Physics (7th ed.), USA: John Wiley and Sons, Inc., ISBN 978-0-471-23231-5
- Grant R. Fowles (1975). Introduction to Modern Optics. Courier Corporation. ISBN 978-0-486-65957-2.
- Hecht, Eugene (2002). Optics (4th ed.). United States of America: Addison Wesley. ISBN 978-0-8053-8566-3.
- Ayahiko Ichimiya; Philip I. Cohen (13 December 2004). Reflection High-Energy Electron Diffraction. Cambridge University Press. ISBN 978-0-521-45373-8. Archived from the original on 16 July 2017.
- Neutze, Richard; Wouts, Remco; van der Spoel, David; Weckert, Edgar; Hajdu, Janos (August 2000). "Potential for biomolecular imaging with femtosecond X-ray pulses". Nature. 406 (6797): 752–757. Bibcode:2000Natur.406..752N. doi:10.1038/35021099. ISSN 1476-4687. PMID 10963603. S2CID 4300920.
- Chapman, Henry N.; Caleman, Carl; Timneanu, Nicusor (17 July 2014). "Diffraction before destruction". Philosophical Transactions of the Royal Society B: Biological Sciences. 369 (1647): 20130313. doi:10.1098/rstb.2013.0313. PMC 4052855. PMID 24914146.
|
Another convenience tool AutoCAD gives us is the Polygon command. The Polygon command allows us to create equilateral shapes having as many sides as we desire. In this lesson, we'll explore the workflow behind the Polygon command. On my screen, I've created some polygon examples. It's important to note that every polygon that we create is based on an imaginary circle. If I pan this up, you can see how a circle could be associated with each of these shapes. In fact, the way we draw polygons is very similar to creating a circle. First, we tell AutoCAD how many sides the polygon has, then we specify the center point, followed by the radius. Now, there is one other thing that AutoCAD needs. It will need to know if the polygon is inscribed or circumscribed. In other words, does the polygon fall on the inside or the outside of the imaginary circle? The way to know which method to use depends on how your polygon was dimensioned. If it was dimensioned from point to point, it's an inscribed polygon, because it falls on the inside of the circle, and this dimension represents the circle's diameter. If the polygon is dimensioned from face to face, it's circumscribed because the polygon falls on the outside of the imaginary circle, and this dimension also represents this circle's diameter. Knowing this, I'd like to create some polygons. I'm going to pan the drawing up, and let's see if we can re-create the general geometry of the stop sign. To launch the Polygon command, we can find it up here in the Draw panel. It actually shares the same menu as the rectangle command. Now, for the number of sides, I'm going to choose 8. I'll be creating the large octagon first. I'll press Enter. I will then click onscreen to specify the center of the polygon. Now, is this polygon inscribed or circumscribed? Well, since it's dimensioned from face to face, this is circumscribed. It falls on the outside of the imaginary circle. I'll choose circumscribed. And then what is the radius of the circle? Well, I can see the diameter is 30, so the radius must be 15. Next, I'd like to create the smaller octagon. To do that, I will relaunch the Polygon command. It becomes the default up here in the Draw panel. I will accept 8 for the number of sides. Now, I need to specify the center. Here's an interesting fact: even though this polygon was created from an imaginary circle, the polygon itself has no center point. So, I'm going to press Escape and cancel this command momentarily. A really quick way to find the center of this polygon would be to launch the Line command and then use my running object snap to snap to the opposite corners.
I'll press Escape when I'm finished. My new polygon will be created from the midpoint of this line. I'll launch polygon again. I'll accept 8. I will then use my running object snap to snap to the middle of this line. This polygon is also going to be circumscribed. And what is the radius of the circle? Well, the larger one had a radius of 15 and I can see the smaller one has a radius that's 1 unit less than that, so I'll type 14 and hit Enter. Now that I'm finished with this line segment, I'll select it and press Delete.
Finally, let's create the carriage bolt geometry that holds the sign to the pole. To view these dimensions, I'm going to click to the lower left and then I'll pull up and click again to create a window selection. Once I've selected that geometry, I'll click the top hot spot on the view cube. This will focus my attention on that area. I will then press Escape to deselect the objects. It looks like the carriage bolt is a hexagon. It also looks like it's dimensioned from point to point so this one is inscribed. We can also see that the center of this polygon falls 3 units below the middle of the top of the sign. Now that I know the dimensions, I'd like to restore my previous view. I could do that by rolling my mouse wheel backwards. Another way would be to come over to this navigation bar. Notice there is a Zoom tool here. If I click the flyout right beneath the tool, I can select Zoom Previous to go back to my previous view. To create the first carriage bolt, I'll launch the polygon command. It has 6 sides. To find the center of the polygon, I'm going to use temporary tracking. I'll type TK and hit Enter. My first tracking point will be the middle of the top of the sign. I will then pull straight down 3 units and hit Enter. Now that I'm where I want to be, I'll hit Enter again to resume the Polygon command. This polygon is inscribed.
And what is the radius of the circle? Well, the diameter is obviously 1 so the radius must be .5. To create the final carriage bolt, I will press the spacebar to relaunch the polygon command. I will hit Enter to accept the number of sides. I'm going to use TK to find the center point. I will snap to the middle of the bottom of the sign and pull straight up 3 units. I will then hit Enter to return to the Polygon command. This polygon is also inscribed and has a radius of .5. As you can see, once you understand the difference between an inscribed and a circumscribed polygon, creating these shapes is as easy as drawing a circle.
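The inscribed/circumscribed distinction in this lesson comes down to a bit of trigonometry: for an inscribed polygon the given radius reaches the vertices, while for a circumscribed polygon it reaches the midpoints of the flat sides, so the vertex distance is r / cos(π/n). The small Python sketch below (not part of the course) generates vertex coordinates either way, using the stop-sign numbers (8 sides, radius 15) as an example.

```python
# Vertices of a regular polygon, AutoCAD-style: "inscribed" means the vertices
# lie ON the imaginary circle of radius r; "circumscribed" means the flat sides
# touch that circle, so the vertices sit at r / cos(pi/n) from the center.
import math

def polygon_vertices(cx, cy, r, n, circumscribed=False):
    vertex_r = r / math.cos(math.pi / n) if circumscribed else r
    return [(cx + vertex_r * math.cos(2 * math.pi * k / n),
             cy + vertex_r * math.sin(2 * math.pi * k / n))
            for k in range(n)]

outer = polygon_vertices(0, 0, 15, 8, circumscribed=True)
print([(round(x, 2), round(y, 2)) for x, y in outer])
```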
|
This set of MCQs helps you brush up on important math topics and prepare you to dive into skill practice.
In circle X, chords AB and CD are congruent and AB is 9 units from X. Find the distance from CD to X.
How many chords?
The region between the chord and either of the arcs is called ____.
An angle whose vertex is the center of a circle is called ____.
If ∠DAB = 81° and ∠ABD = 69°, the value of ∠ACB is ___°.
The bisectors of opposite angles of a cyclic quadrilateral ABCD intersect the circle circumscribing it at the points P and Q. If radius of the circle is 18 cm, find the distance between points P and Q.
Segments with endpoints on a circle are called _____.
If BC is the diameter of the circle and ∠BAO = 35°, then find the value of ∠ADC.
|
Decryption is the process of converting encrypted data into recognizable information. It is the opposite of encryption, which takes readable data and makes it unrecognizable.
Files and data transfers may be encrypted to prevent unauthorized access. If someone tries to view an encrypted document, it will appear as a random series of characters. If someone tries to "snoop" on an encrypted network connection, the data will not make any sense. So how is it possible to view encrypted data? The answer is decryption.
How Decryption Works
There are several ways to encrypt files, but most methods involve one or more "keys." When encrypting a file, the key may be a single password. Network encryption often involves a public and private key pair, with the public key shared between the two ends of the data transfer.
Decryption works by applying the opposite conversion algorithm used to encrypt the data. The same key is required to return the encrypted data to its original state. For example, if the password "ABC123" is used to encrypt a file, the same password is needed to decrypt the file.
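As a concrete illustration of "same key in, same key out," here is a minimal sketch using Python's third-party cryptography package (not part of this article). It uses a randomly generated key; in practice a key is often derived from a password with a key-derivation function.

```python
# Minimal sketch of symmetric encryption/decryption using the third-party
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # the shared secret; both sides need this
cipher = Fernet(key)

token = cipher.encrypt(b"meet at noon")   # unreadable ciphertext
plain = cipher.decrypt(token)             # requires the same key

print(token)   # random-looking bytes
print(plain)   # b'meet at noon'
```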
Secure network transfers, including Internet connections, handle encryption and decryption in the background. Protocols such as HTTPS and Secure SMTP encrypt and decrypt data on the fly. These protocols automatically generate a secure key for each encrypted network transfer and do not require a password.
NOTE: Encrypting data is a smart way to protect private information from prying eyes. But make sure to use a password you will remember when encrypting files. If you cannot remember your password, the data may be unrecoverable.
|
Actuation is the process of conversion of energy to mechanical form. A device that accomplishes this conversion is called an actuator.
The actuator plays a very important role in implementing control. The controller provides the command signal to the actuator for actuation.
The control code aims at "driving the actuator when an event has occurred".
Actuators for Robots:
1. Actuators are used in order to produce mechanical movement in robots.
2. Actuators are the muscles of robots. There are many types of actuators available depending on
the load involved. The term load is associated with many factors including force, torque, speed of
operation, accuracy, precision and power consumption.
3. CHARACTERISTICS OF ACTUATING SYSTEMS:
Weight, Power-to-weight Ratio, Operating pressure:
1) Stepper motors are generally heavier than servomotors for the same power.
2) The higher the voltage of an electric motor, the better the power-to-weight ratio.
3) Pneumatic systems deliver the lowest power-to-weight ratio.
4) Hydraulic systems have the highest power-to-weight ratio. In these systems, the weight is
actually composed of two portions. One is the hydraulic actuators, and the other is the hydraulic
power unit (pump, cylinders, rams, reservoirs, filter, and electric motor). If the power unit must
also move with the robot, the total power-to-weight ratio will be much less.
4. Types of Actuators:
1. Electric Actuators.
• Stepper Motor
• DC Motor
2. Hydraulic Actuators.
3. Pneumatic Actuators.
4. Magnetostrictive Actuators.
5. Shape Memory Metal Actuators.
5. Electrical actuators:
The electric actuators generally require reduction gears of high ratios.
The high-gear ratio linearizes the system dynamics and reduces the coupling effects.
This is an added advantage of the electric actuators but at the cost of increased joint friction,
elasticity and backlash.
On the other hand, use of hydraulic or pneumatic actuators to directly drive the joint
minimizes the drawbacks due to friction, elasticity, and backlash.
1) Easy to control
2) From W to MW
3) Normally high velocities 1000 - 10000 rpm
4) Several types
5) Accurate servo control
6) Ideal torque for driving
7) Excellent efficiency
8) Autonomous power system
6. ELECTRICAL ACTUATORS:
1. Mainly rotating but also linear ones are available.
2. Linear movement with gear or with real linear motor.
ELECTRICAL ACTUATOR TYPES
1. Servo Motor
2. Stepper Motor
3. Brushless DC-motors
4. Asynchronous motors
5. Synchronous motors
6. Reluctance motors.
7. Servo Motor:
The servo motor is most commonly used for high technology devices in the industrial
application like automation technology.
It is a self-contained electrical device that rotates parts of a machine with high efficiency and with great precision.
The output shaft of this motor can be moved to a particular angle.
Servo motors are mainly used in home electronics, toys, cars, airplanes, etc.
8. Types of Servo Motors
Servo motors are classified into different types based on their application, such as AC servo
motor, DC servo motor, brushless DC servo motor, positional rotation, continuous rotation and
linear servo motor etc.
Typical servo motors comprise three wires, namely power, control and ground.
The shape and size of these motors depend on their applications.
The servo motor is the most common type used in hobby applications and robotics due to its simplicity, affordability and reliability of control by microprocessors.
9. DC Servo Motor
The motor which is used as a DC servo motor generally has a separate DC source for the field winding and the armature winding.
The control can be achieved either by controlling the armature current or the field current.
Field control includes some particular advantages over armature control.
In the same way armature control includes some advantages
over field control.
Based on the applications the control should be applied to
the DC servo motor.
The DC servo motor provides very accurate and also fast response to start or stop command signals due to the low armature inductive reactance.
DC servo motors are used in similar equipment and in computerized numerically controlled (CNC) machines.
10. AC Servo Motor
An AC servo motor is an AC motor that includes an encoder and is used with controllers to give closed-loop control and feedback.
This motor can be positioned with high accuracy and controlled precisely as required for the application.
Frequently these motors have higher design tolerances or better bearings, and some simple designs also use higher voltages in order to accomplish greater torque.
Applications of AC servo motors mainly involve automation, robotics, CNC machinery, and other applications requiring a high level of precision and versatility.
11. Positional Rotation Servo Motor
• Positional rotation servo motor is a most common type of servo motor.
• The shaft's output rotates through about 180°.
• It includes physical stops located in the gear mechanism to stop turning outside these limits to
guard the rotation sensor.
• These common servos are used in radio-controlled watercraft, radio-controlled cars, aircraft, robots, toys and many other applications.
12. Continuous Rotation Servo Motor
The continuous rotation servo motor is quite similar to the common positional rotation servo motor, but it can turn in either direction indefinitely.
The control signal, rather than setting the static position of the servo, is interpreted as the speed and direction of rotation.
The range of possible commands causes the servo to rotate clockwise or anticlockwise as preferred, at varying speed, depending on the command signal.
This type of motor can be used in a radar dish, if you are riding one on a robot, or as a drive motor on a mobile robot.
13. Applications of Servo Motor
The servo motor is small and efficient, and is well suited to applications that require precise position control. This motor is controlled by a pulse-width-modulated (PWM) signal; a sketch of the angle-to-pulse mapping is given after the application list below.
The applications of servo motors mainly involve in computers, robotics, toys, CD/DVD players, etc.
These motors are extensively used in those applications where a particular task is to be done frequently in
an exact manner.
The servo motor is used in robotics to actuate movements, moving the arm to its precise angle.
The Servo motor is used to start, move and stop conveyor belts carrying the product along with many
stages. For instance, product labeling, bottling and packaging
The servo motor is built into the camera to adjust the lens and correct out-of-focus images.
The servo motor is used in robotic vehicles to control the robot wheels, producing enough torque to move, start and stop the vehicle and control its speed.
The servo motor is used in solar tracking systems to correct the angle of the panel so that each solar panel keeps facing the sun.
The servo motor is used in metal forming and cutting machines to provide precise motion control.
The Servo motor is used in automatic door openers to control the door in public places like
supermarkets, hospitals and theatres
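Hobby servos of the positional type described above are typically commanded with a pulse of roughly 1-2 ms repeated every 20 ms, where the pulse width encodes the target angle. Below is a minimal sketch of that mapping; the exact pulse range varies by servo, so these numbers are typical assumptions, not a specification.

```python
# Map a target angle (0-180 degrees) to a typical hobby-servo pulse width.
# The 1.0-2.0 ms range and 20 ms (50 Hz) frame are common conventions,
# not a universal specification; check the servo's datasheet.

def angle_to_pulse_ms(angle_deg, min_ms=1.0, max_ms=2.0, max_angle=180.0):
    angle_deg = max(0.0, min(max_angle, angle_deg))   # clamp to valid range
    return min_ms + (max_ms - min_ms) * angle_deg / max_angle

for angle in (0, 45, 90, 180):
    pulse = angle_to_pulse_ms(angle)
    duty = pulse / 20.0 * 100          # duty cycle within a 20 ms frame
    print(f"{angle:3d} deg -> {pulse:.2f} ms pulse ({duty:.1f}% duty)")
```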
14. Stepper Motor:
A stepper motor is an electromechanical device that converts electrical power into mechanical power.
It is also a brushless, synchronous electric motor that can divide a full rotation into a large number of steps.
The motor’s position can be controlled accurately without any feedback mechanism, as long as
the motor is carefully sized to the application.
Stepper motors are similar to switched reluctance motors.
15. The stepper motor uses the theory of operation for magnets to make the motor shaft turn a
precise distance when a pulse of electricity is provided.
The stator has eight poles, and the rotor has six poles. The rotor will require 24 pulses of
electricity to move the 24 steps to make one complete revolution.
Another way to say this is that the rotor will move precisely 15° for each pulse of electricity
that the motor receives.
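The 15° figure above is just 360° divided by the 24 steps per revolution, so converting a desired rotation into a pulse count is simple arithmetic; a small sketch:

```python
# Convert a desired rotation into stepper pulses for the 24-step
# (15 degrees per pulse) motor described above.

STEPS_PER_REV = 24
DEGREES_PER_STEP = 360 / STEPS_PER_REV     # 15 degrees

def pulses_for(angle_deg):
    # round to the nearest whole step; a stepper cannot stop between steps
    return round(angle_deg / DEGREES_PER_STEP)

print(pulses_for(90))    # 6 pulses
print(pulses_for(360))   # 24 pulses -> one full revolution
```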
16. Working Principle:
Stepper motors operate differently from DC brush motors, which rotate when voltage is applied
to their terminals.
Stepper motors, on the other hand, effectively have multiple toothed electromagnets arranged
around a central gear-shaped piece of iron.
The electromagnets are energized by an external control circuit, for example a microcontroller.
To make the motor shaft turn, first one electromagnet is given power, which makes the gear’s
teeth magnetically attracted to the electromagnet’s teeth.
When the gear's teeth are thus aligned to the first electromagnet, they are slightly offset from the next electromagnet.
So when the next electromagnet is turned ON and the first is turned OFF, the gear rotates
slightly to align with the next one and from there the process is repeated.
17. Each of those slight rotations is called a step, with an integer number of steps making a full rotation.
In that way, the motor can be turned by a precise angle. Stepper motors don't rotate continuously; they rotate in steps.
There are 4 coils with a 90° angle between each other fixed on the stator. The stepper motor connections are determined by the way the coils are interconnected.
In a stepper motor, the coils are not connected together. The motor has a 90° rotation step, with the coils being energized in a cyclic order determining the shaft rotation direction.
The working of this motor is shown by operating the switch. The coils are activated in series at 1 sec intervals. The shaft rotates 90° each time the next coil is activated. Its low-speed torque will vary directly with current.
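The cyclic "energize the next coil, drop the previous one" pattern described above can be sketched as a simple loop. The hardware details are omitted here, so the "energize" step is just a print statement standing in for driving the real coil.

```python
# Sketch of wave-drive stepping: energize the 4 stator coils one at a time in
# cyclic order. Each activation turns the shaft by one step (90 degrees in the
# 4-coil example above). Real hardware would drive transistors or GPIO pins.
import time

COILS = ["A", "B", "C", "D"]          # the four coils, 90 degrees apart

def step_motor(steps, delay_s=1.0, direction=1):
    idx = 0
    for _ in range(steps):
        active = COILS[idx % len(COILS)]
        print(f"energize coil {active}")   # stand-in for the real drive signal
        time.sleep(delay_s)
        idx += direction                   # +1 = one direction, -1 = reverse

step_motor(4)   # four steps of 90 degrees = one full revolution
```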
18. Types of Stepper Motor:
There are three main types of stepper motors, they are:
1.Permanent magnet stepper
2.Variable reluctance stepper
3.Hybrid synchronous stepper
Permanent Magnet Stepper Motor: Permanent magnet motors use a permanent magnet (PM) in
the rotor and operate on the attraction or repulsion between the rotor PM and the stator electromagnets.
Variable Reluctance Stepper Motor: Variable reluctance (VR) motors have a plain iron rotor
and operate based on the principle that minimum reluctance occurs with minimum gap, hence the
rotor points are attracted toward the stator magnet poles.
Hybrid Synchronous Stepper Motor: Hybrid stepper motors are named because they use a
combination of permanent magnet (PM) and variable reluctance (VR) techniques to achieve
maximum power in a small package size.
19. Advantages of Stepper Motor:
1.The rotation angle of the motor is proportional to the input pulse.
2.The motor has full torque at standstill.
3.Precise positioning and repeatability of movement since good stepper motors have an accuracy
of 3 – 5% of a step and this error is non cumulative from one step to the next.
4.Excellent response to starting, stopping and reversing.
5.Very reliable since there are no contact brushes in the motor. Therefore the life of the motor is
simply dependent on the life of the bearing.
6. The motor's response to digital input pulses provides open-loop control, making the motor
simpler and less costly to control.
7.It is possible to achieve very low speed synchronous rotation with a load that is directly coupled
to the shaft.
8.A wide range of rotational speeds can be realized as the speed is proportional to the frequency of
the input pulses.
1. Industrial Machines – Stepper motors are used in automotive gauges, machine tooling and automated production equipment.
2.Security – new surveillance products for the security industry.
3.Medical – Stepper motors are used inside medical scanners, samplers, and also found inside
digital dental photography, fluid pumps, respirators and blood analysis machinery.
4. Consumer Electronics – Stepper motors are used in cameras for automatic digital camera focus.
They also have applications in business machines and computer peripherals.
21. Permanent Magnet DC Motor
In a DC motor, an armature rotates inside a magnetic field. Basic working
principle of DC motor is based on the fact that whenever a current carrying
conductor is placed inside a magnetic field, there will be mechanical force
experienced by that conductor.
Working Principle of Permanent Magnet DC Motor or PMDC Motor
The working principle of a PMDC motor is just similar to the general working principle of a DC motor.
That is, when a current-carrying conductor comes inside a magnetic field, a mechanical force will be experienced by the conductor, and the direction of this force is governed by Fleming's left hand rule.
As in a permanent magnet DC motor, the armature is placed inside the magnetic field of
permanent magnet; the armature rotates in the direction of the generated force.
22. Here each conductor of the armature experiences the mechanical force F = B·I·L newtons, where B is the magnetic field strength in tesla (weber/m²), I is the current in amperes flowing through that conductor, and L is the length in metres of the conductor that lies within the magnetic field.
Each conductor of the armature experiences a force and the compilation of those forces produces
a torque, which tends to rotate the armature.
Equivalent Circuit of Permanent Magnet DC Motor or PMDC Motor
As in PMDC motor the field is produced by permanent magnet,
there is no need of drawing field coils in the equivalent circuit of
permanent magnet DC motor.
The supply voltage to the armature will have an armature resistance drop, and the rest of the supply voltage is countered by the back emf of the motor. Hence the voltage equation of the motor is given by V = Eb + I·R, where I is the armature current and R is the armature resistance of the motor, Eb is the back emf and V is the supply voltage.
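As a small worked example of the two relations on this slide, the force on one conductor F = B·I·L and the armature voltage equation V = Eb + I·R, here is a short sketch; all of the motor numbers are made-up illustrative values.

```python
# Worked example of F = B*I*L (force on one armature conductor) and the
# PMDC voltage equation V = Eb + I*R. All numbers are illustrative.

B = 0.5      # magnetic flux density, tesla
L = 0.2      # conductor length in the field, metres
R = 1.5      # armature resistance, ohms
V = 12.0     # supply voltage, volts
Eb = 9.0     # back emf at the current speed, volts

I = (V - Eb) / R                 # armature current from V = Eb + I*R
F = B * I * L                    # force on a single conductor, newtons

print(f"armature current I = {I:.2f} A")     # 2.00 A
print(f"force per conductor F = {F:.2f} N")  # 0.20 N
```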
23. Advantages of Permanent Magnet DC Motor or PMDC Motor
PMDC motor have some advantages over other types of DC motors. They are :
1. No need of field excitation arrangement.
2. No input power is consumed for excitation, which improves the efficiency of the DC motor.
3. No field coil hence space for field coil is saved which reduces the overall size of the motor.
4. Cheaper and economical for fractional kW rated applications.
24. Disadvantages of Permanent Magnet DC Motor or PMDC Motor
1. In this case, the armature reaction of the DC motor cannot be compensated, hence the magnetic strength of the field may get weak due to the demagnetizing effect of armature reaction.
2. There is also a chance of the poles getting permanently (partially) demagnetized due to excessive armature current during starting, reversal and overloading conditions of the motor.
3.Another major disadvantage of PMDC motor is that, the field in the air gap is fixed
and limited and it cannot be controlled externally. Therefore, very efficient speed control
of DC motor in this type of motor is difficult.
Applications of Permanent Magnet DC Motor or PMDC Motor
PMDC motor is extensively used where small DC motors are required and also very effective
control is not required, such as in automobile starters, toys, wipers, washers, hot blowers, air
conditioners, computer disc drives and in many more.
25. Hydraulic Actuators:
Hydraulic actuators can produce large force/torque to drive the manipulator joints without the use
of reduction gearing and are easily applied for robotic position control. But hydraulic systems are
cumbersome and messy and require a great deal of equipment such as pumps, actuators, hoses,
and servo valves. In applications where position and/or torque must be accurately controlled, hydraulic actuators prove disadvantageous due to friction of seals, leakage, viscosity of the oil, and its complex temperature dependence.
26. Hydraulic system generally consists of the following parts:
1. Hydraulic linear or rotary cylinders and rams to provide the force or torque needed to move the
joints and are controlled by servo valve or manual valve.
2. A hydraulic pump to provide high pressure fluid to the system
3. Electric motor to operate the hydraulic pump.
4. Cooling system to get rid of heat (cooling fans, radiators, and cooled air).
5. Reservoir to keep fluid supply available to the system.
6. Servo valve which is a very sensitive valve that controls the amount and the rate of the fluid to
the cylinders. The servo valve is generally driven by a hydraulic servomotor.
7. Sensors to control the motion of the cylinders (position, velocity, magnetic, touch,..)
8. Connecting hoses to transport the pressurized fluid.
9. Safety check valves, holding valves.
27. Pneumatic Actuators:
Pneumatic actuators possess all the disadvantages of hydraulic actuators except that these are
relatively cleaner. Pneumatic actuators are difficult to control accurately due to high friction of
seals and compressibility of air.
Like hydraulic except power from compressed air.
Fast on/off type tasks.
Big forces with elasticity.
No hydraulic oil leak problems.
Speed control is not possible because the air pressure depends on many variables that are outside of our control.
28. Basic Robot Motions: A robot must have a control system to operate its drive system, which is used to move the arm, wrist, and body of the robot along various paths. When industrial robots are compared by their control systems, they can be divided into four major types:
Limited Sequence Robots
Playback Robots with Point to Point Control
Playback Robots with Continuous Path Control
Intelligent Robots
Limited Sequence Robots:
Limited sequence robots use mechanical stops and limit switches to determine the end points of their joint motions. These robots do not require any programming; the manipulator is simply set up to perform the operation, so each joint can only travel between its extreme limits. This is the lowest level of control, and it is best suited to simple operations such as the pick-and-place process. Robots of this type are generally equipped with a pneumatic drive system.
29. Playback Robots:
Playback robots perform a task by being taught positions, which are stored in memory and then replayed by the robot as often as required. Generally, playback robots employ a more sophisticated control system. They can be divided into two important types, namely:
Point to Point control robots
Continuous Path control robots
Playback Robots with Point to Point Control:
Point to point robots, abbreviated PTP, have the capability to travel from one position to another. The desired positions are taught and stored in the control unit memory. The robot does not control the path taken between the stored positions; the motion can be adjusted only in small increments through programming. Robots of this type are used for spot welding, loading and unloading, and drilling operations.
30. Playback Robots with Continuous Path Control:
Continuous path control, also known as CP control, allows the robot to control its path and stop at any specified position. These robots commonly move in straight lines: the initial and final points are described by the programmer, and the control unit computes the intermediate positions of the individual joints so that the tool travels in a straight line. Likewise, the robot can move along a curved path by passing its arm through a series of taught points. In these robots a microprocessor is used as the controller. Some of the applications are arc welding, spray painting, and gluing operations; a minimal interpolation sketch follows below.
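To make the PTP-versus-CP distinction concrete, here is a minimal sketch of the simplest interpolation scheme, in plain Python: the joint values and step count are illustrative assumptions, and a real CP controller would interpolate in Cartesian space using inverse kinematics.

```python
# Hedged sketch: linear joint-space interpolation between two taught points.
# A PTP controller commands only q_start and q_end; a CP-style controller
# also generates the intermediate set-points. All values are illustrative.

def interpolate_joints(q_start, q_end, steps):
    """Yield joint set-points linearly interpolated between two poses."""
    for i in range(steps + 1):
        t = i / steps  # normalized progress, 0.0 .. 1.0
        yield [a + t * (b - a) for a, b in zip(q_start, q_end)]

# Two taught joint configurations (degrees) for a hypothetical 3-joint arm.
q_start = [0.0, 45.0, -30.0]
q_end = [90.0, 10.0, 15.0]

for setpoint in interpolate_joints(q_start, q_end, steps=5):
    print([round(q, 1) for q in setpoint])
```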
Intelligent Robots:
Intelligent robots can play back defined motions and can also adapt to their environment. They use a digital computer as a controller, and sensors are incorporated for receiving information during the process. The programming language is typically a high-level language. Robots of this kind are capable of communicating with the programmer in the work volume. They are best suited to arc welding and assembly purposes.
|
In this geometry worksheet, students plot coordinate pairs and connect the points to create different shapes. They give the shapes their polygon names. They differentiate between simple functions and the rectangular hyperbola.
Student Workbook: Geometry and Measurement
The volume of this geometry packet is obtuse! Packed with every topic and type of problem learned in the class, each page is nicely organized by section and contains an instructional activity and sometimes reference examples.
Transformations in the Coordinate Plane
Your learners connect the new concepts of transformations in the coordinate plane to their previous knowledge using the solid vocabulary development in this unit. Like a foreign language, mathematics has its own set of vocabulary terms...
Connecting Algebra and Geometry Through Coordinates
This unit on connecting algebra and geometry covers a number of topics including worksheets on the distance formula, finding the perimeter and area of polygons, the slope formula, parallel and perpendicular lines, parallelograms,...
The Complex Geometry of Islamic Design
Discover the prevalence of geometric design in Islamic culture with this wonderful informational video. It begins with an overview of the complexity of designs dating back to the eighth century during early Islam, and then delves into...
Why Do Honeybees Love Hexagons?
Float like a butterfly, think like a bee! Build a huge hive, hexagonally! Find out the reason that hexagons are the most efficient storage shape for the honeybees' honeycombs. This neat little video would be a sweet addition to your life...
|
The Moon is an astronomical body that orbits Earth as its only natural satellite. It is the fifth-largest satellite in the Solar System, and the largest among planetary satellites relative to the size of the planet that it orbits (its primary). The Moon is, after Jupiter's satellite Io, the second-densest satellite in the Solar System among those whose densities are known.
Full moon seen from North America

|Semi-major axis|384399 km (0.00257 AU)|
|Sidereal orbital period|27 d 7 h 43 min 11.5 s|
|Synodic period|29 d 12 h 44 min 2.9 s|
|Average orbital speed| |
|Inclination|5.145° to the ecliptic|
|Node precession|Regressing by one revolution in 18.61 years|
|Apsidal precession|Progressing by one revolution in 8.85 years|
|Mean radius|1737.4 km (0.2727 of Earth's)|
|Equatorial radius|1738.1 km (0.2725 of Earth's)|
|Polar radius|1736.0 km (0.2731 of Earth's)|
|Circumference|10921 km (equatorial)|
|Surface area|3.793×10⁷ km² (0.074 of Earth's)|
|Volume|2.1958×10¹⁰ km³ (0.020 of Earth's)|
|Mass|7.342×10²² kg (0.012300 of Earth's)|
|Mean density|0.606 × Earth|
|Surface gravity|1.62 m/s² (0.1654 g)|
|Sidereal rotation period|27.321661 d (synchronous)|
|Equatorial rotation velocity| |
|North pole right ascension| |
|North pole declination| |
|Angular diameter|29.3 to 34.1 arcminutes|
|Composition by volume| |
The Moon is thought to have formed about 4.51 billion years ago, not long after Earth. The most widely accepted explanation is that the Moon formed from the debris left over after a giant impact between Earth and a Mars-sized body called Theia. New research of Moon rocks, although not rejecting the Theia hypothesis, suggests that the Moon may be older than previously thought.
The Moon is in synchronous rotation with Earth, and thus always shows the same side to Earth, the near side. The near side is marked by dark volcanic maria that fill the spaces between the bright ancient crustal highlands and the prominent impact craters. After the Sun, the Moon is the second-brightest regularly visible celestial object in Earth's sky. Its surface is actually dark, although compared to the night sky it appears very bright, with a reflectance just slightly higher than that of worn asphalt. Its gravitational influence produces the ocean tides, body tides, and the slight lengthening of the day.
The Moon's average orbital distance is 384,402 km (238,856 mi), or 1.28 light-seconds, about thirty times the diameter of Earth. The Moon's apparent size in the sky is almost the same as that of the Sun, since the Sun is about 400 times the Moon's distance and about 400 times its diameter. Therefore, the Moon covers the Sun nearly precisely during a total solar eclipse. This matching of apparent visual size will not continue in the far future, because the Moon's distance from Earth is gradually increasing.
The Moon was first reached in September 1959 by the Soviet Union's Luna 2, an unmanned spacecraft, followed by the first successful soft landing by Luna 9 in 1966. The United States' NASA Apollo program achieved the only manned lunar missions to date, beginning with the first manned orbital mission by Apollo 8 in 1968, and six manned landings between 1969 and 1972, with the first being Apollo 11 in July 1969. These missions returned lunar rocks which have been used to develop a geological understanding of the Moon's origin, internal structure, and the Moon's later history. Since the 1972 Apollo 17 mission the Moon has been visited only by unmanned spacecraft.
Both the Moon's natural prominence in the earthly sky and its regular cycle of phases as seen from Earth have provided cultural references and influences for human societies and cultures since time immemorial. Such cultural influences can be found in language, lunar calendar systems, art, and mythology.
Name and etymology
The usual English proper name for Earth's natural satellite is "the Moon", which in nonscientific texts is usually not capitalized. The noun moon is derived from Old English mōna, which (like all Germanic language cognates) stems from Proto-Germanic *mēnô, which comes from Proto-Indo-European *mḗh₁n̥s "moon", "month", which comes from the Proto-Indo-European root *meh₁- "to measure", the month being the ancient unit of time measured by the Moon. Occasionally, the name "Luna" is used. In literature, especially science fiction, "Luna" is used to distinguish it from other moons, while in poetry, the name has been used to denote personification of Earth's moon.
The modern English adjective pertaining to the Moon is lunar, derived from the Latin word for the Moon, luna. The adjective selenic (usually only used to refer to the chemical element selenium) is so rarely used to refer to the Moon that this meaning is not recorded in most major dictionaries. It is derived from the Ancient Greek word for the Moon, σελήνη (selḗnē), from which is however also derived the prefix "seleno-", as in selenography, the study of the physical features of the Moon, as well as the element name selenium. Both the Greek goddess Selene and the Roman goddess Diana were alternatively called Cynthia. The names Luna, Cynthia, and Selene are reflected in terminology for lunar orbits in words such as apolune, pericynthion, and selenocentric. The name Diana comes from the Proto-Indo-European *diw-yo, "heavenly", which comes from the PIE root *dyeu- "to shine," which in many derivatives means "sky, heaven, and god" and is also the origin of Latin dies, "day".
The Moon formed 4.51 billion years ago, some 60 million years after the origin of the Solar System. Several forming mechanisms have been proposed, including the fission of the Moon from Earth's crust through centrifugal force (which would require too great an initial spin of Earth), the gravitational capture of a pre-formed Moon (which would require an unfeasibly extended atmosphere of Earth to dissipate the energy of the passing Moon), and the co-formation of Earth and the Moon together in the primordial accretion disk (which does not explain the depletion of metals in the Moon). These hypotheses also cannot account for the high angular momentum of the Earth–Moon system.
The prevailing hypothesis is that the Earth–Moon system formed after an impact of a Mars-sized body (named Theia) with the proto-Earth (giant impact). The impact blasted material into Earth's orbit and then the material accreted and formed the Moon.
The Moon's far side has a crust that is 50 km (31 mi) thicker than that of the near side. This is thought to be because the Moon fused from two different bodies.
This hypothesis, although not perfect, perhaps best explains the evidence. Eighteen months prior to an October 1984 conference on lunar origins, Bill Hartmann, Roger Phillips, and Jeff Taylor challenged fellow lunar scientists: "You have eighteen months. Go back to your Apollo data, go back to your computer, do whatever you have to, but make up your mind. Don't come to our conference unless you have something to say about the Moon's birth." At the 1984 conference at Kona, Hawaii, the giant impact hypothesis emerged as the most consensual theory.
Before the conference, there were partisans of the three "traditional" theories, plus a few people who were starting to take the giant impact seriously, and there was a huge apathetic middle who didn't think the debate would ever be resolved. Afterward, there were essentially only two groups: the giant impact camp and the agnostics.
Giant impacts are thought to have been common in the early Solar System. Computer simulations of giant impacts have produced results that are consistent with the mass of the lunar core and the angular momentum of the Earth–Moon system. These simulations also show that most of the Moon derived from the impactor, rather than the proto-Earth. However, more recent simulations suggest a larger fraction of the Moon derived from the proto-Earth. Other bodies of the inner Solar System such as Mars and Vesta have, according to meteorites from them, very different oxygen and tungsten isotopic compositions compared to Earth. However, Earth and the Moon have nearly identical isotopic compositions. The isotopic equalization of the Earth-Moon system might be explained by the post-impact mixing of the vaporized material that formed the two, although this is debated.
The impact released a lot of energy and then the released material re-accreted into the Earth–Moon system. This would have melted the outer shell of Earth, and thus formed a magma ocean. Similarly, the newly formed Moon would also have been affected and had its own lunar magma ocean; its depth is estimated from about 500 km (300 miles) to 1,737 km (1,079 miles).
In 2001, a team at the Carnegie Institution of Washington reported the most precise measurement of the isotopic signatures of lunar rocks. To their surprise, the rocks from the Apollo program had the same isotopic signature as rocks from Earth, but differed from almost all other bodies in the Solar System. This observation was unexpected, because most of the material that formed the Moon was thought to come from Theia, and it was announced in 2007 that there was less than a 1% chance that Theia and Earth had identical isotopic signatures. Other Apollo lunar samples were shown in 2012 to have the same titanium isotope composition as Earth, which conflicts with what is expected if the Moon formed far from Earth or is derived from Theia. These discrepancies may be explained by variations of the giant impact hypothesis.
The Moon is a very slightly scalene ellipsoid due to tidal stretching, with its long axis displaced 30° from facing the Earth (due to gravitational anomalies from impact basins). Its shape is more elongated than current tidal forces can account for. This 'fossil bulge' indicates that the Moon solidified when it orbited at half its current distance to the Earth, and that it is now too cold for its shape to adjust to its orbit.
The Moon is a differentiated body. It has a geochemically distinct crust, mantle, and core. The Moon has a solid iron-rich inner core with a radius possibly as small as 240 kilometres (150 mi) and a fluid outer core primarily made of liquid iron with a radius of roughly 300 kilometres (190 mi). Around the core is a partially molten boundary layer with a radius of about 500 kilometres (310 mi). This structure is thought to have developed through the fractional crystallization of a global magma ocean shortly after the Moon's formation 4.5 billion years ago.
Crystallization of this magma ocean would have created a mafic mantle from the precipitation and sinking of the minerals olivine, clinopyroxene, and orthopyroxene; after about three-quarters of the magma ocean had crystallised, lower-density plagioclase minerals could form and float into a crust atop. The final liquids to crystallise would have been initially sandwiched between the crust and mantle, with a high abundance of incompatible and heat-producing elements.
Consistent with this perspective, geochemical mapping made from orbit suggests the crust of mostly anorthosite. The Moon rock samples of the flood lavas that erupted onto the surface from partial melting in the mantle confirm the mafic mantle composition, which is more iron-rich than that of Earth. The crust is on average about 50 kilometres (31 mi) thick.
The Moon is the second-densest satellite in the Solar System, after Io. However, the inner core of the Moon is small, with a radius of about 350 kilometres (220 mi) or less, around 20% of the radius of the Moon. Its composition is not well understood, but it is probably metallic iron alloyed with a small amount of sulphur and nickel; analyses of the Moon's time-variable rotation suggest that it is at least partly molten.
The topography of the Moon has been measured with laser altimetry and stereo image analysis. Its most visible topographic feature is the giant far-side South Pole–Aitken basin, some 2,240 km (1,390 mi) in diameter, the largest crater on the Moon and the second-largest confirmed impact crater in the Solar System. At 13 km (8.1 mi) deep, its floor is the lowest point on the surface of the Moon. The highest elevations of the surface are located directly to the northeast, and it has been suggested might have been thickened by the oblique formation impact of the South Pole–Aitken basin. Other large impact basins such as Imbrium, Serenitatis, Crisium, Smythii, and Orientale also possess regionally low elevations and elevated rims. The far side of the lunar surface is on average about 1.9 km (1.2 mi) higher than that of the near side.
The discovery of fault scarp cliffs by the Lunar Reconnaissance Orbiter suggests that the Moon has shrunk by about 90 metres (300 ft) within the past billion years. Similar shrinkage features exist on Mercury. A recent study of over 12,000 images from the orbiter found that Mare Frigoris near the north pole, a vast basin assumed to be geologically dead, has been cracking and shifting. Since the Moon does not have tectonic plates, its tectonic activity is slow, and cracks develop as it loses heat over the years.
The dark and relatively featureless lunar plains, clearly seen with the naked eye, are called maria (Latin for "seas"; singular mare), as they were once believed to be filled with water; they are now known to be vast solidified pools of ancient basaltic lava. Although similar to terrestrial basalts, lunar basalts have more iron and no minerals altered by water. The majority of these lavas erupted or flowed into the depressions associated with impact basins. Several geologic provinces containing shield volcanoes and volcanic domes are found within the near side "maria".
Almost all maria are on the near side of the Moon, and cover 31% of the surface of the near side, compared with 2% of the far side. This is thought to be due to a concentration of heat-producing elements under the crust on the near side, seen on geochemical maps obtained by Lunar Prospector's gamma-ray spectrometer, which would have caused the underlying mantle to heat up, partially melt, rise to the surface and erupt. Most of the Moon's mare basalts erupted during the Imbrian period, 3.0–3.5 billion years ago, although some radiometrically dated samples are as old as 4.2 billion years. Until recently, the youngest eruptions, dated by crater counting, appeared to have been only 1.2 billion years ago. In 2006, a study of Ina, a tiny depression in Lacus Felicitatis, found jagged, relatively dust-free features that, because of the lack of erosion by infalling debris, appeared to be only 2 million years old. Moonquakes and releases of gas also indicate some continued lunar activity. In 2014 NASA announced "widespread evidence of young lunar volcanism" at 70 irregular mare patches identified by the Lunar Reconnaissance Orbiter, some less than 50 million years old. This raises the possibility of a much warmer lunar mantle than previously believed, at least on the near side where the deep crust is substantially warmer because of the greater concentration of radioactive elements. Just prior to this, evidence has been presented for 2–10 million years younger basaltic volcanism inside Lowell crater, Orientale basin, located in the transition zone between the near and far sides of the Moon. An initially hotter mantle and/or local enrichment of heat-producing elements in the mantle could be responsible for prolonged activities also on the far side in the Orientale basin.
The lighter-colored regions of the Moon are called terrae, or more commonly highlands, because they are higher than most maria. They have been radiometrically dated to having formed 4.4 billion years ago, and may represent plagioclase cumulates of the lunar magma ocean. In contrast to Earth, no major lunar mountains are believed to have formed as a result of tectonic events.
The concentration of maria on the Near Side likely reflects the substantially thicker crust of the highlands of the Far Side, which may have formed in a slow-velocity impact of a second moon of Earth a few tens of millions of years after their formation.
The other major geologic process that has affected the Moon's surface is impact cratering, with craters formed when asteroids and comets collide with the lunar surface. There are estimated to be roughly 300,000 craters wider than 1 km (0.6 mi) on the Moon's near side alone. The lunar geologic timescale is based on the most prominent impact events, including Nectaris, Imbrium, and Orientale, structures characterized by multiple rings of uplifted material, between hundreds and thousands of kilometers in diameter and associated with a broad apron of ejecta deposits that form a regional stratigraphic horizon. The lack of an atmosphere, weather and recent geological processes mean that many of these craters are well-preserved. Although only a few multi-ring basins have been definitively dated, they are useful for assigning relative ages. Because impact craters accumulate at a nearly constant rate, counting the number of craters per unit area can be used to estimate the age of the surface. The radiometric ages of impact-melted rocks collected during the Apollo missions cluster between 3.8 and 4.1 billion years old: this has been used to propose a Late Heavy Bombardment of impacts.
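Since crater density is used as a chronometer in this way, here is a toy sketch of the idea (plain Python; the production-rate constant and the counts are invented for illustration, and real crater chronology uses calibrated, non-linear production functions):

```python
# Hedged sketch: relative surface age from crater counts, assuming a
# constant production rate. The rate below is invented for illustration;
# real chronology functions are calibrated against dated Apollo samples.
PRODUCTION_RATE = 2.5e-4  # hypothetical craters (>1 km) per km^2 per Gyr

def surface_age_gyr(crater_count, area_km2, rate=PRODUCTION_RATE):
    """Relative age in Gyr from crater density, under a constant-rate model."""
    return (crater_count / area_km2) / rate

# Hypothetical counts over two mapped regions of equal area:
print(f"Highland patch: {surface_age_gyr(900, 1.0e6):.2f} Gyr")
print(f"Mare patch:     {surface_age_gyr(150, 1.0e6):.2f} Gyr")
```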
Blanketed on top of the Moon's crust is a highly comminuted (broken into ever smaller particles) and impact-gardened surface layer called regolith, formed by impact processes. The finer regolith, the lunar soil of silicon dioxide glass, has a texture resembling snow and a scent resembling spent gunpowder. The regolith of older surfaces is generally thicker than that of younger surfaces: it varies in thickness from 10–20 m (33–66 ft) in the highlands to 3–5 m (10–16 ft) in the maria. Beneath the finely comminuted regolith layer is the megaregolith, a layer of highly fractured bedrock many kilometers thick.
Comparison of high-resolution images obtained by the Lunar Reconnaissance Orbiter has shown a contemporary crater-production rate significantly higher than previously estimated. A secondary cratering process caused by distal ejecta is thought to churn the top two centimeters of regolith a hundred times more quickly than previous models suggested – on a timescale of 81,000 years.
Lunar swirls are enigmatic features found across the Moon's surface. They are characterized by a high albedo, appear optically immature (i.e. the optical characteristics of a relatively young regolith), and have often a sinuous shape. Their shape is often accentuated by low albedo regions that wind between the bright swirls.
Presence of water
Liquid water cannot persist on the lunar surface. When exposed to solar radiation, water quickly decomposes through a process known as photodissociation and is lost to space. However, since the 1960s, scientists have hypothesized that water ice may be deposited by impacting comets or possibly produced by the reaction of oxygen-rich lunar rocks, and hydrogen from solar wind, leaving traces of water which could possibly persist in cold, permanently shadowed craters at either pole on the Moon. Computer simulations suggest that up to 14,000 km2 (5,400 sq mi) of the surface may be in permanent shadow. The presence of usable quantities of water on the Moon is an important factor in rendering lunar habitation as a cost-effective plan; the alternative of transporting water from Earth would be prohibitively expensive.
In the years since, signatures of water have been found on the lunar surface. In 1994, the bistatic radar experiment on the Clementine spacecraft indicated the existence of small, frozen pockets of water close to the surface. However, later radar observations by Arecibo suggest these findings may rather be rocks ejected from young impact craters. In 1998, the neutron spectrometer on the Lunar Prospector spacecraft showed that high concentrations of hydrogen are present in the first meter of depth in the regolith near the polar regions. Volcanic lava beads, brought back to Earth aboard Apollo 15, showed small amounts of water in their interior.
The 2008 Chandrayaan-1 spacecraft has since confirmed the existence of surface water ice, using the on-board Moon Mineralogy Mapper. The spectrometer observed absorption lines common to hydroxyl, in reflected sunlight, providing evidence of large quantities of water ice, on the lunar surface. The spacecraft showed that concentrations may possibly be as high as 1,000 ppm. Using the mapper's reflectance spectra, indirect lighting of areas in shadow confirmed water ice within 20° latitude of both poles in 2018. In 2009, LCROSS sent a 2,300 kg (5,100 lb) impactor into a permanently shadowed polar crater, and detected at least 100 kg (220 lb) of water in a plume of ejected material. Another examination of the LCROSS data showed the amount of detected water to be closer to 155 ± 12 kg (342 ± 26 lb).
In May 2011, 615–1410 ppm water in melt inclusions in lunar sample 74220 was reported, the famous high-titanium "orange glass soil" of volcanic origin collected during the Apollo 17 mission in 1972. The inclusions were formed during explosive eruptions on the Moon approximately 3.7 billion years ago. This concentration is comparable with that of magma in Earth's upper mantle. Although of considerable selenological interest, this announcement affords little comfort to would-be lunar colonists – the sample originated many kilometers below the surface, and the inclusions are so difficult to access that it took 39 years to find them with a state-of-the-art ion microprobe instrument.
Analysis of the findings of the Moon Mineralogy Mapper (M3) revealed in August 2018 for the first time "definitive evidence" for water-ice on the lunar surface. The data revealed the distinct reflective signatures of water-ice, as opposed to dust and other reflective substances. The ice deposits were found on the North and South poles, although it is more abundant in the South, where water is trapped in permanently shadowed craters and crevices, allowing it to persist as ice on the surface since they are shielded from the sun.
The gravitational field of the Moon has been measured through tracking the Doppler shift of radio signals emitted by orbiting spacecraft. The main lunar gravity features are mascons, large positive gravitational anomalies associated with some of the giant impact basins, partly caused by the dense mare basaltic lava flows that fill those basins. The anomalies greatly influence the orbit of spacecraft about the Moon. There are some puzzles: lava flows by themselves cannot explain all of the gravitational signature, and some mascons exist that are not linked to mare volcanism.
The Moon has an external magnetic field of about 1–100 nanoteslas, less than one-hundredth that of Earth. The Moon does not currently have a global dipolar magnetic field and only has crustal magnetization likely acquired early in its history when a dynamo was still operating. Theoretically, some of the remnant magnetization may originate from transient magnetic fields generated during large impacts through the expansion of plasma clouds. These clouds are generated during large impacts in an ambient magnetic field. This is supported by the location of the largest crustal magnetizations situated near the antipodes of the giant impact basins.
The Moon has an atmosphere so tenuous as to be nearly vacuum, with a total mass of less than 10 tonnes (9.8 long tons; 11 short tons). The surface pressure of this small mass is around 3 × 10−15 atm (0.3 nPa); it varies with the lunar day. Its sources include outgassing and sputtering, a product of the bombardment of lunar soil by solar wind ions. Elements that have been detected include sodium and potassium, produced by sputtering (also found in the atmospheres of Mercury and Io); helium-4 and neon from the solar wind; and argon-40, radon-222, and polonium-210, outgassed after their creation by radioactive decay within the crust and mantle. The absence of such neutral species (atoms or molecules) as oxygen, nitrogen, carbon, hydrogen and magnesium, which are present in the regolith, is not understood. Water vapor has been detected by Chandrayaan-1 and found to vary with latitude, with a maximum at ~60–70 degrees; it is possibly generated from the sublimation of water ice in the regolith. These gases either return into the regolith because of the Moon's gravity or are lost to space, either through solar radiation pressure or, if they are ionized, by being swept away by the solar wind's magnetic field.
A permanent, asymmetric Moon dust cloud exists around the Moon, created by small particles from comets. An estimated 5 tons of comet particles strike the Moon's surface every 24 hours, ejecting dust above the surface. The dust stays above the Moon approximately 10 minutes, taking 5 minutes to rise and 5 minutes to fall. On average, 120 kilograms of dust are present above the Moon, rising to 100 kilometers above the surface. The dust measurements were made by LADEE's Lunar Dust EXperiment (LDEX) between 20 and 100 kilometers above the surface, during a six-month period. LDEX detected an average of one 0.3-micrometer Moon dust particle each minute. Dust particle counts peaked during the Geminid, Quadrantid, Northern Taurid, and Omicron Centaurid meteor showers, when the Earth and Moon pass through comet debris. The cloud is asymmetric, being denser near the boundary between the Moon's dayside and nightside.
Past thicker atmosphere
In October 2017, NASA scientists at the Marshall Space Flight Center and the Lunar and Planetary Institute in Houston announced their finding, based on studies of Moon magma samples retrieved by the Apollo missions, that the Moon had once possessed a relatively thick atmosphere for a period of 70 million years between 3 and 4 billion years ago. This atmosphere, sourced from gases ejected from lunar volcanic eruptions, was twice the thickness of that of present-day Mars. The ancient lunar atmosphere was eventually stripped away by solar winds and dissipated into space.
The Moon's axial tilt with respect to the ecliptic is only 1.5424°, much less than the 23.44° of Earth. Because of this, the Moon's solar illumination varies much less with season, and topographical details play a crucial role in seasonal effects. From images taken by Clementine in 1994, it appears that four mountainous regions on the rim of Peary Crater at the Moon's north pole may remain illuminated for the entire lunar day, creating peaks of eternal light. No such regions exist at the south pole. Similarly, there are places that remain in permanent shadow at the bottoms of many polar craters, and these "craters of eternal darkness" are extremely cold: Lunar Reconnaissance Orbiter measured the lowest summer temperatures in craters at the southern pole at 35 K (−238 °C; −397 °F) and just 26 K (−247 °C; −413 °F) close to the winter solstice in north polar Hermite Crater. This is the coldest temperature in the Solar System ever measured by a spacecraft, colder even than the surface of Pluto. Average temperatures of the Moon's surface are reported, but temperatures of different areas will vary greatly depending upon whether they are in sunlight or shadow.
The Moon makes a complete orbit around Earth with respect to the fixed stars about once every 27.3 days (its sidereal period). However, because Earth is moving in its orbit around the Sun at the same time, it takes slightly longer for the Moon to show the same phase to Earth, which is about 29.5 days (its synodic period). Unlike most satellites of other planets, the Moon orbits closer to the ecliptic plane than to the planet's equatorial plane. The Moon's orbit is subtly perturbed by the Sun and Earth in many small, complex and interacting ways. For example, the plane of the Moon's orbit gradually rotates once every 18.61 years, which affects other aspects of lunar motion. These follow-on effects are mathematically described by Cassini's laws.
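The sidereal and synodic periods quoted above are tied together by the standard synodic-period relation, 1/T_syn = 1/T_sid − 1/T_year. A quick check in Python, using the sidereal month from the table above and the sidereal year (365.256 days, a standard value not given in the text):

```python
# Check: synodic month from sidereal month and Earth's orbital period,
# via 1/T_syn = 1/T_sid - 1/T_year. Values are approximate.
T_sid = 27.321661    # sidereal month (days)
T_year = 365.25636   # sidereal year (days)

T_syn = 1.0 / (1.0 / T_sid - 1.0 / T_year)
print(f"Synodic month: {T_syn:.3f} days")  # ~29.531 days
```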
The Moon is exceptionally large relative to Earth: Its diameter is more than a quarter and its mass is 1/81 of Earth's. It is the largest moon in the Solar System relative to the size of its planet, though Charon is larger relative to the dwarf planet Pluto, at 1/9 Pluto's mass. The Earth and the Moon's barycentre, their common center of mass, is located 1,700 km (1,100 mi) (about a quarter of Earth's radius) beneath Earth's surface.
The Earth revolves around the Earth-Moon barycentre once a sidereal month, with 1/81 the speed of the Moon, or about 12.5 metres (41 ft) per second. This motion is superimposed on the much larger revolution of the Earth around the Sun at a speed of about 30 kilometres (19 mi) per second.
The surface area of the Moon is slightly less than the areas of North and South America combined.
Appearance from Earth
The Moon is in synchronous rotation as it orbits Earth; it rotates about its axis in about the same time it takes to orbit Earth. This results in it always keeping nearly the same face turned towards Earth. However, because of the effect of libration, about 59% of the Moon's surface can actually be seen from Earth. The side of the Moon that faces Earth is called the near side, and the opposite side the far side. The far side is often inaccurately called the "dark side", but it is in fact illuminated as often as the near side: once every 29.5 Earth days. During new moon, the near side is dark.
The Moon had once rotated at a faster rate, but early in its history, its rotation slowed and became tidally locked in this orientation as a result of frictional effects associated with tidal deformations caused by Earth. With time, the energy of rotation of the Moon on its axis was dissipated as heat, until there was no rotation of the Moon relative to Earth. In 2016, planetary scientists, using data collected on the much earlier NASA Lunar Prospector mission, found two hydrogen-rich areas on opposite sides of the Moon, probably in the form of water ice. It is speculated that these patches were the poles of the Moon billions of years ago, before it was tidally locked to Earth.
The Moon has an exceptionally low albedo, giving it a reflectance that is slightly brighter than that of worn asphalt. Despite this, it is the brightest object in the sky after the Sun. This is due partly to the brightness enhancement of the opposition surge; the Moon at quarter phase is only one-tenth as bright, rather than half as bright, as at full moon. Additionally, color constancy in the visual system recalibrates the relations between the colors of an object and its surroundings, and because the surrounding sky is comparatively dark, the sunlit Moon is perceived as a bright object. The edges of the full moon seem as bright as the center, without limb darkening, because of the reflective properties of lunar soil, which retroreflects light more towards the Sun than in other directions. The Moon does appear larger when close to the horizon, but this is a purely psychological effect, known as the moon illusion, first described in the 7th century BC. The full Moon's angular diameter is about 0.52° (on average) in the sky, roughly the same apparent size as the Sun (see § Eclipses).
The Moon's highest altitude at culmination varies by its phase and time of year. The full moon is highest in the sky during winter (for each hemisphere). The 18.61-year nodal cycle has an influence on lunar standstill. When the ascending node of the lunar orbit is in the vernal equinox, the lunar declination can reach up to plus or minus 28° each month. This means the Moon can pass overhead if viewed from latitudes up to 28° north or south (of the Equator), instead of only 18°. The orientation of the Moon's crescent also depends on the latitude of the viewing location; an observer in the tropics can see a smile-shaped crescent Moon. The Moon is visible for two weeks every 27.3 days at the North and South Poles. Zooplankton in the Arctic use moonlight when the Sun is below the horizon for months on end.
The distance between the Moon and Earth varies from around 356,400 km (221,500 mi) at perigee (closest) to 406,700 km (252,700 mi) at apogee (farthest). On 14 November 2016, it was closer to Earth when at full phase than it has been since 1948, 14% closer than its farthest position in apogee. Reported as a "supermoon", this closest point coincided within an hour of a full moon, and it was 30% more luminous than when at its greatest distance, because its angular diameter is 14% greater and its apparent area therefore about 30% greater (1.14² ≈ 1.30). The human perception of a reduction in brightness is approximately the square root of the actual reduction: when the actual reduction is 1.00/1.30, or about 0.770, the perceived reduction is about √0.770 ≈ 0.877, or 1.00/1.14. This gives a maximum perceived increase of 14% between apogee and perigee moons of the same phase.
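A two-line check of that square-root relation (the 0.5 exponent is the approximation implied by the worked numbers above):

```python
# Check: perceived brightness change ~ sqrt(actual change).
actual = 1.00 / 1.30       # ~0.770, apogee/perigee luminosity ratio
perceived = actual ** 0.5  # ~0.877, i.e. about 1/1.14
print(f"actual={actual:.3f}, perceived={perceived:.3f}")
```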
There has been historical controversy over whether features on the Moon's surface change over time. Today, many of these claims are thought to be illusory, resulting from observation under different lighting conditions, poor astronomical seeing, or inadequate drawings. However, outgassing does occasionally occur and could be responsible for a minor percentage of the reported lunar transient phenomena. Recently, it has been suggested that a roughly 3 km (1.9 mi) diameter region of the lunar surface was modified by a gas release event about a million years ago.
The Moon's appearance, like the Sun's, can be affected by Earth's atmosphere. Common optical effects are the 22° halo ring, formed when the Moon's light is refracted through the ice crystals of high cirrostratus clouds, and smaller coronal rings when the Moon is seen through thin clouds.
The illuminated fraction of the visible sphere (degree of illumination) is given by (1 − cos e)/2, where e is the elongation (i.e., the angle between the Moon, the observer on Earth, and the Sun).
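A quick evaluation of that formula at a few elongations (plain Python; the sample angles are chosen for illustration):

```python
# Illuminated fraction of the lunar disc: (1 - cos e) / 2.
import math

for e_deg in (0, 90, 120, 180):  # new moon, quarter, gibbous, full
    frac = (1 - math.cos(math.radians(e_deg))) / 2
    print(f"elongation {e_deg:3d} deg: illuminated fraction {frac:.2f}")
```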
The gravitational attraction that masses have for one another decreases inversely with the square of the distance of those masses from each other. As a result, the slightly greater attraction that the Moon has for the side of Earth closest to the Moon, as compared to the part of the Earth opposite the Moon, results in tidal forces. Tidal forces affect both the Earth's crust and oceans.
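Because the attraction falls off as the inverse square, the difference in pull across Earth's radius (the tidal force) falls off roughly as the cube of the distance. The sketch below evaluates that difference for the Moon and the Sun, using standard values for G, the masses, and the mean distances (none of which are given in the text); it yields a solar tidal effect a bit under half the lunar one, consistent in magnitude with the roughly 40% figure cited below.

```python
# Hedged sketch: differential (tidal) acceleration across Earth's radius,
# straight from the inverse-square law. Constants are standard values.
G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
R_EARTH = 6.371e6  # Earth radius (m)

def tidal_accel(mass_kg, distance_m):
    """Difference in pull between Earth's near side and its centre (m/s^2)."""
    return G * mass_kg * (1 / (distance_m - R_EARTH) ** 2 - 1 / distance_m ** 2)

moon = tidal_accel(7.342e22, 3.844e8)   # Moon mass and mean distance
sun = tidal_accel(1.989e30, 1.496e11)   # Sun mass and mean distance
print(f"Moon: {moon:.2e} m/s^2")
print(f"Sun:  {sun:.2e} m/s^2 (~{sun / moon:.0%} of the Moon's)")
```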
The most obvious effect of tidal forces is to cause two bulges in the Earth's oceans, one on the side facing the Moon and the other on the side opposite. This results in elevated sea levels called ocean tides. As the Earth spins on its axis, one of the ocean bulges (high tide) is held in place "under" the Moon, while another such tide is opposite. As a result, there are two high tides and two low tides in about 24 hours. Since the Moon is orbiting the Earth in the same direction as the Earth's rotation, the high tides occur about every 12 hours and 25 minutes; the extra 25 minutes arises because the Moon advances along its orbit while the Earth rotates, as derived below. The Sun has the same kind of tidal effect on the Earth, but its tidal forces are only about 40% of the Moon's; the interplay of the Sun and Moon is responsible for spring and neap tides. If the Earth were a water world (one with no continents), it would produce a tide of only one meter, and that tide would be very predictable, but the ocean tides are greatly modified by other effects: the frictional coupling of water to Earth's rotation through the ocean floors, the inertia of water's movement, ocean basins that grow shallower near land, and the sloshing of water between different ocean basins. As a result, the timing of the tides at most points on the Earth is a product of observations that are explained, incidentally, by theory.
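The 12 h 25 min figure follows from the length of the lunar day, the time between successive passes of the Moon over the same meridian; a short derivation in Python (the synodic month value is the approximate figure computed earlier):

```python
# Derivation of the ~12 h 25 min tidal interval from the lunar day.
solar_day_h = 24.0
synodic_month_d = 29.53  # days per lunar phase cycle (approx.)

# In one solar day the Moon moves ahead by 1/29.53 of its orbit, so the
# Earth must rotate a little longer to bring it back over the same meridian.
lunar_day_h = solar_day_h / (1 - 1 / synodic_month_d)
tide_interval_h = lunar_day_h / 2  # two tidal bulges per lunar day

hours = int(tide_interval_h)
minutes = (tide_interval_h - hours) * 60
print(f"Lunar day: {lunar_day_h:.2f} h; high tides every {hours} h {minutes:.0f} min")
```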
While gravitation causes acceleration and movement of the Earth's fluid oceans, gravitational coupling between the Moon and Earth's solid body is mostly elastic and plastic. The result is a further tidal effect of the Moon on the Earth that causes a bulge of the solid portion of the Earth nearest the Moon that acts as a torque in opposition to the Earth's rotation. This "drains" angular momentum and rotational kinetic energy from Earth's spin, slowing the Earth's rotation. That angular momentum, lost from the Earth, is transferred to the Moon in a process (confusingly known as tidal acceleration), which lifts the Moon into a higher orbit and results in its lower orbital speed about the Earth. Thus the distance between Earth and Moon is increasing, and the Earth's spin is slowing in reaction. Measurements from laser reflectors left during the Apollo missions (lunar ranging experiments) have found that the Moon's distance increases by 38 mm (1.5 in) per year (roughly the rate at which human fingernails grow). Atomic clocks also show that Earth's day lengthens by about 15 microseconds every year, slowly increasing the rate at which UTC is adjusted by leap seconds. Left to run its course, this tidal drag would continue until the spin of Earth and the orbital period of the Moon matched, creating mutual tidal locking between the two. As a result, the Moon would be suspended in the sky over one meridian, as is already currently the case with Pluto and its moon Charon. However, the Sun will become a red giant engulfing the Earth-Moon system long before this occurrence.
In a like manner, the lunar surface experiences tides of around 10 cm (4 in) amplitude over 27 days, with two components: a fixed one due to Earth, because they are in synchronous rotation, and a varying component from the Sun. The Earth-induced component arises from libration, a result of the Moon's orbital eccentricity (if the Moon's orbit were perfectly circular, there would only be solar tides). Libration also changes the angle from which the Moon is seen, allowing a total of about 59% of its surface to be seen from Earth over time. The cumulative effects of stress built up by these tidal forces produces moonquakes. Moonquakes are much less common and weaker than are earthquakes, although moonquakes can last for up to an hour – significantly longer than terrestrial quakes – because of the absence of water to damp out the seismic vibrations. The existence of moonquakes was an unexpected discovery from seismometers placed on the Moon by Apollo astronauts from 1969 through 1972.
Eclipses only occur when the Sun, Earth, and Moon are all in a straight line (termed "syzygy"). Solar eclipses occur at new moon, when the Moon is between the Sun and Earth. In contrast, lunar eclipses occur at full moon, when Earth is between the Sun and Moon. The apparent size of the Moon is roughly the same as that of the Sun, with both being viewed at close to one-half a degree wide. The Sun is much larger than the Moon but it is the vastly greater distance that gives it the same apparent size as the much closer and much smaller Moon from the perspective of Earth. The variations in apparent size, due to the non-circular orbits, are nearly the same as well, though occurring in different cycles. This makes possible both total (with the Moon appearing larger than the Sun) and annular (with the Moon appearing smaller than the Sun) solar eclipses. In a total eclipse, the Moon completely covers the disc of the Sun and the solar corona becomes visible to the naked eye. Because the distance between the Moon and Earth is very slowly increasing over time, the angular diameter of the Moon is decreasing. Also, as it evolves toward becoming a red giant, the size of the Sun, and its apparent diameter in the sky, are slowly increasing. The combination of these two changes means that hundreds of millions of years ago, the Moon would always completely cover the Sun on solar eclipses, and no annular eclipses were possible. Likewise, hundreds of millions of years in the future, the Moon will no longer cover the Sun completely, and total solar eclipses will not occur.
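The near-equality of apparent sizes follows directly from the ~400× ratios of diameter and distance; computing both angular diameters with standard values (not given in the text) under the small-angle approximation:

```python
# Check: apparent angular sizes of the Sun and Moon (small-angle approximation).
import math

def angular_diameter_deg(diameter_km, distance_km):
    return math.degrees(diameter_km / distance_km)

print(f"Sun:  {angular_diameter_deg(1_391_400, 149_600_000):.3f} deg")  # ~0.533
print(f"Moon: {angular_diameter_deg(3_474.8, 384_400):.3f} deg")        # ~0.518
```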
Because the Moon's orbit around Earth is inclined by about 5.145° (5° 9') to the orbit of Earth around the Sun, eclipses do not occur at every full and new moon. For an eclipse to occur, the Moon must be near the intersection of the two orbital planes. The periodicity and recurrence of eclipses of the Sun by the Moon, and of the Moon by Earth, is described by the saros, which has a period of approximately 18 years.
Because the Moon is continuously blocking our view of a half-degree-wide circular area of the sky, the related phenomenon of occultation occurs when a bright star or planet passes behind the Moon and is occulted: hidden from view. In this way, a solar eclipse is an occultation of the Sun. Because the Moon is comparatively close to Earth, occultations of individual stars are not visible everywhere on the planet, nor at the same time. Because of the precession of the lunar orbit, each year different stars are occulted.
Observation and exploration
One of the earliest-discovered possible depictions of the Moon is a 5000-year-old rock carving Orthostat 47 at Knowth, Ireland.
Understanding of the Moon's cycles was an early development of astronomy: by the 5th century BC, Babylonian astronomers had recorded the 18-year Saros cycle of lunar eclipses, and Indian astronomers had described the Moon's monthly elongation. The Chinese astronomer Shi Shen (fl. 4th century BC) gave instructions for predicting solar and lunar eclipses. Later, the physical form of the Moon and the cause of moonlight became understood. The ancient Greek philosopher Anaxagoras (d. 428 BC) reasoned that the Sun and Moon were both giant spherical rocks, and that the latter reflected the light of the former. Although the Chinese of the Han Dynasty believed the Moon to be energy equated to qi, their 'radiating influence' theory also recognized that the light of the Moon was merely a reflection of the Sun, and Jing Fang (78–37 BC) noted the sphericity of the Moon. In the 2nd century AD, Lucian wrote the novel A True Story, in which the heroes travel to the Moon and meet its inhabitants. In 499 AD, the Indian astronomer Aryabhata mentioned in his Aryabhatiya that reflected sunlight is the cause of the shining of the Moon. The astronomer and physicist Alhazen (965–1039) found that sunlight was not reflected from the Moon like a mirror, but that light was emitted from every part of the Moon's sunlit surface in all directions. Shen Kuo (1031–1095) of the Song dynasty created an allegory equating the waxing and waning of the Moon to a round ball of reflective silver that, when doused with white powder and viewed from the side, would appear to be a crescent.
In Aristotle's (384–322 BC) description of the universe, the Moon marked the boundary between the spheres of the mutable elements (earth, water, air and fire), and the imperishable stars of aether, an influential philosophy that would dominate for centuries. However, in the 2nd century BC, Seleucus of Seleucia correctly theorized that tides were due to the attraction of the Moon, and that their height depends on the Moon's position relative to the Sun. In the same century, Aristarchus computed the size and distance of the Moon from Earth, obtaining a value of about twenty times the radius of Earth for the distance. These figures were greatly improved by Ptolemy (90–168 AD): his values of a mean distance of 59 times Earth's radius and a diameter of 0.292 Earth diameters were close to the correct values of about 60 and 0.273 respectively. Archimedes (287–212 BC) designed a planetarium that could calculate the motions of the Moon and other objects in the Solar System.
During the Middle Ages, before the invention of the telescope, the Moon was increasingly recognised as a sphere, though many believed that it was "perfectly smooth".
In 1609, Galileo Galilei drew one of the first telescopic drawings of the Moon in his book Sidereus Nuncius and noted that it was not smooth but had mountains and craters. Thomas Harriot had made, but not published such drawings a few months earlier. Telescopic mapping of the Moon followed: later in the 17th century, the efforts of Giovanni Battista Riccioli and Francesco Maria Grimaldi led to the system of naming of lunar features in use today. The more exact 1834–36 Mappa Selenographica of Wilhelm Beer and Johann Heinrich Mädler, and their associated 1837 book Der Mond, the first trigonometrically accurate study of lunar features, included the heights of more than a thousand mountains, and introduced the study of the Moon at accuracies possible in earthly geography. Lunar craters, first noted by Galileo, were thought to be volcanic until the 1870s proposal of Richard Proctor that they were formed by collisions. This view gained support in 1892 from the experimentation of geologist Grove Karl Gilbert, and from comparative studies from 1920 to the 1940s, leading to the development of lunar stratigraphy, which by the 1950s was becoming a new and growing branch of astrogeology.
The Cold War-inspired Space Race between the Soviet Union and the U.S. led to an acceleration of interest in exploration of the Moon. Once launchers had the necessary capabilities, these nations sent unmanned probes on both flyby and impact/lander missions. Spacecraft from the Soviet Union's Luna program were the first to accomplish a number of goals: following three unnamed, failed missions in 1958, the first human-made object to escape Earth's gravity and pass near the Moon was Luna 1; the first human-made object to impact the lunar surface was Luna 2, and the first photographs of the normally occluded far side of the Moon were made by Luna 3, all in 1959.
The first spacecraft to perform a successful lunar soft landing was Luna 9 and the first unmanned vehicle to orbit the Moon was Luna 10, both in 1966. Rock and soil samples were brought back to Earth by three Luna sample return missions (Luna 16 in 1970, Luna 20 in 1972, and Luna 24 in 1976), which returned 0.3 kg total. Two pioneering robotic rovers landed on the Moon in 1970 and 1973 as a part of Soviet Lunokhod programme.
Luna 24 was the last Soviet mission to the Moon.
United States missions
During the late 1950s at the height of the Cold War, the United States Army conducted a classified feasibility study that proposed the construction of a manned military outpost on the Moon called Project Horizon with the potential to conduct a wide range of missions from scientific research to nuclear Earth bombardment. The study included the possibility of conducting a lunar-based nuclear test. The Air Force, which at the time was in competition with the Army for a leading role in the space program, developed its own similar plan called Lunex. However, both these proposals were ultimately passed over as the space program was largely transferred from the military to the civilian agency NASA.
Following President John F. Kennedy's 1961 commitment to a manned moon landing before the end of the decade, the United States, under NASA leadership, launched a series of unmanned probes to develop an understanding of the lunar surface in preparation for manned missions: the Jet Propulsion Laboratory's Ranger program produced the first close-up pictures; the Lunar Orbiter program produced maps of the entire Moon; the Surveyor program landed its first spacecraft four months after Luna 9. The manned Apollo program was developed in parallel; after a series of unmanned and manned tests of the Apollo spacecraft in Earth orbit, and spurred on by a potential Soviet lunar flight, in 1968 Apollo 8 made the first manned mission to lunar orbit. The subsequent landing of the first humans on the Moon in 1969 is seen by many as the culmination of the Space Race.
Neil Armstrong became the first person to walk on the Moon as the commander of the American mission Apollo 11, setting foot on the lunar surface at 02:56 UTC on 21 July 1969. An estimated 500 million people worldwide watched the transmission by the Apollo TV camera, the largest television audience for a live broadcast at that time. The Apollo missions 11 to 17 (except Apollo 13, which aborted its planned lunar landing) returned 380.05 kilograms (837.87 lb) of lunar rock and soil in 2,196 separate samples. The American Moon landing and return was enabled by considerable technological advances in the early 1960s, in domains such as ablation chemistry, software engineering, and atmospheric re-entry technology, and by highly competent management of the enormous technical undertaking.
Scientific instrument packages were installed on the lunar surface during all the Apollo landings. Long-lived instrument stations, including heat flow probes, seismometers, and magnetometers, were installed at the Apollo 12, 14, 15, 16, and 17 landing sites. Direct transmission of data to Earth concluded in late 1977 because of budgetary considerations, but as the stations' lunar laser ranging corner-cube retroreflector arrays are passive instruments, they are still being used. Ranging to the stations is routinely performed from Earth-based stations with an accuracy of a few centimeters, and data from this experiment are being used to place constraints on the size of the lunar core.
After the first Moon race there were years of near quietude but starting in the 1990s, many more countries have become involved in direct exploration of the Moon. In 1990, Japan became the third country to place a spacecraft into lunar orbit with its Hiten spacecraft. The spacecraft released a smaller probe, Hagoromo, in lunar orbit, but the transmitter failed, preventing further scientific use of the mission. In 1994, the U.S. sent the joint Defense Department/NASA spacecraft Clementine to lunar orbit. This mission obtained the first near-global topographic map of the Moon, and the first global multispectral images of the lunar surface. This was followed in 1998 by the Lunar Prospector mission, whose instruments indicated the presence of excess hydrogen at the lunar poles, which is likely to have been caused by the presence of water ice in the upper few meters of the regolith within permanently shadowed craters.
The European spacecraft SMART-1, the second ion-propelled spacecraft, was in lunar orbit from 15 November 2004 until its lunar impact on 3 September 2006, and made the first detailed survey of chemical elements on the lunar surface.
The ambitious Chinese Lunar Exploration Program began with Chang'e 1, which successfully orbited the Moon from 5 November 2007 until its controlled lunar impact on 1 March 2009, obtaining a full image map of the Moon. Chang'e 2, beginning in October 2010, reached the Moon more quickly, mapped the Moon at a higher resolution over an eight-month period, then left lunar orbit for an extended stay at the Earth–Sun L2 Lagrangian point, before finally performing a flyby of asteroid 4179 Toutatis on 13 December 2012 and then heading off into deep space. On 14 December 2013, Chang'e 3 landed a lunar lander on the Moon's surface, which in turn deployed a lunar rover named Yutu (Chinese: 玉兔; literally "Jade Rabbit"). This was the first lunar soft landing since Luna 24 in 1976, and the first lunar rover mission since Lunokhod 2 in 1973. The next mission, Chang'e 4, landed in January 2019, becoming the first spacecraft ever to land on the Moon's far side. China intends to follow this up with a sample-return mission (Chang'e 5) in 2020.
Between 4 October 2007 and 10 June 2009, the Japan Aerospace Exploration Agency's Kaguya (Selene) mission, a lunar orbiter fitted with a high-definition video camera and two small radio-transmitter satellites, obtained lunar geophysics data and took the first high-definition movies from beyond Earth orbit. India's first lunar mission, Chandrayaan-1, orbited from 8 November 2008 until loss of contact on 27 August 2009, creating a high-resolution chemical, mineralogical, and photo-geological map of the lunar surface, and confirming the presence of water molecules in lunar soil. The Indian Space Research Organisation had planned to launch Chandrayaan-2 in 2013, including a Russian robotic lunar rover, but the failure of Russia's Fobos-Grunt mission delayed the project, and Chandrayaan-2 was eventually launched on 22 July 2019. Its lander, Vikram, attempted to land in the lunar south pole region on 6 September 2019, but the signal was lost at an altitude of 2.1 km (1.3 mi); its subsequent fate is unknown.
The U.S. co-launched the Lunar Reconnaissance Orbiter (LRO) and the LCROSS impactor and follow-up observation orbiter on 18 June 2009; LCROSS completed its mission by making a planned and widely observed impact in the crater Cabeus on 9 October 2009, whereas LRO is currently in operation, obtaining precise lunar altimetry and high-resolution imagery. In November 2011, the LRO passed over the large and bright Aristarchus crater. NASA released photos of the crater on 25 December 2011.
Two NASA GRAIL spacecraft began orbiting the Moon around 1 January 2012, on a mission to learn more about the Moon's internal structure. NASA's LADEE probe, designed to study the lunar exosphere, achieved orbit on 6 October 2013.
Upcoming lunar missions include Russia's Luna-Glob, an unmanned lander with a set of seismometers, and an orbiter based on the failed Martian Fobos-Grunt mission. Privately funded lunar exploration has been promoted by the Google Lunar X Prize, announced 13 September 2007, which offers US$20 million to anyone who can land a robotic rover on the Moon and meet other specified criteria. Shackleton Energy Company is building a program to establish operations on the south pole of the Moon to harvest water and supply its propellant depots.
NASA began to plan to resume manned missions following the call by U.S. President George W. Bush on 14 January 2004 for a manned mission to the Moon by 2019 and the construction of a lunar base by 2024. The Constellation program was funded, construction and testing began on a manned spacecraft and launch vehicle, and design studies for a lunar base were undertaken. However, that program was canceled in favor of a manned asteroid landing by 2025 and a manned Mars orbit by 2035. India has also expressed its hope to send a manned mission to the Moon by 2020.
On 28 February 2018, SpaceX, Vodafone, Nokia and Audi announced a collaboration to install a 4G wireless communication network on the Moon, with the aim of streaming live footage from the surface to Earth.
Planned commercial missions
In 2007, the X Prize Foundation together with Google launched the Google Lunar X Prize to encourage commercial endeavors to the Moon. A prize of $20 million was to be awarded to the first private venture to get to the Moon with a robotic lander by the end of March 2018, with additional prizes worth $10 million for further milestones. As of August 2016, 16 teams were reportedly participating in the competition. In January 2018 the foundation announced that the prize would go unclaimed as none of the finalist teams would be able to make a launch attempt by the deadline.
In August 2016, the US government granted permission to US-based start-up Moon Express to land on the Moon. This marked the first time that a private enterprise was given the right to do so. The decision is regarded as a precedent helping to define regulatory standards for deep-space commercial activity in the future, as thus far companies' operation had been restricted to being on or around Earth.
On 29 November 2018 NASA announced that nine commercial companies would compete to win a contract to send small payloads to the Moon in what is known as Commercial Lunar Payload Services. According to NASA administrator Jim Bridenstine, "We are building a domestic American capability to get back and forth to the surface of the moon."
Astronomy from the Moon
For many years, the Moon has been recognized as an excellent site for telescopes. It is relatively nearby; astronomical seeing is not a concern; certain craters near the poles are permanently dark and cold, and thus especially useful for infrared telescopes; and radio telescopes on the far side would be shielded from the radio chatter of Earth. The lunar soil, although it poses a problem for any moving parts of telescopes, can be mixed with carbon nanotubes and epoxies and employed in the construction of mirrors up to 50 meters in diameter. A lunar zenith telescope can be made cheaply with an ionic liquid.
Although Luna landers scattered pennants of the Soviet Union on the Moon, and U.S. flags were symbolically planted at their landing sites by the Apollo astronauts, no nation claims ownership of any part of the Moon's surface. Russia, China, and the U.S. are party to the 1967 Outer Space Treaty, which defines the Moon and all outer space as the "province of all mankind". This treaty also restricts the use of the Moon to peaceful purposes, explicitly banning military installations and weapons of mass destruction. The 1979 Moon Agreement was created to restrict the exploitation of the Moon's resources by any single nation, but as of November 2016, it has been signed and ratified by only 18 nations, none of which engages in self-launched human space exploration or has plans to do so. Although several individuals have made claims to the Moon in whole or in part, none of these are considered credible.
The contrast between the brighter highlands and the darker maria creates the patterns seen by different cultures as the Man in the Moon, the rabbit and the buffalo, among others. In many prehistoric and ancient cultures, the Moon was personified as a deity or other supernatural phenomenon, and astrological views of the Moon continue to be propagated today.
In Proto-Indo-European religion, the moon was personified as the male god *Meh₁not. The ancient Sumerians believed that the Moon was the god Nanna, who was the father of Inanna, the goddess of the planet Venus, and Utu, the god of the sun. Nanna was later known as Sîn, and was particularly associated with magic and sorcery. In Greco-Roman mythology, the Sun and the Moon are represented as male and female, respectively (Helios/Sol and Selene/Luna); this is a development unique to the eastern Mediterranean and traces of an earlier male moon god in the Greek tradition are preserved in the figure of Menelaus.
In Mesopotamian iconography, the crescent was the primary symbol of Nanna-Sîn. In ancient Greek art, the Moon goddess Selene was represented wearing a crescent on her headgear in an arrangement reminiscent of horns. The star and crescent arrangement also goes back to the Bronze Age, representing either the Sun and Moon, or the Moon and planet Venus, in combination. It came to represent the goddess Artemis or Hecate, and via the patronage of Hecate came to be used as a symbol of Byzantium.
An iconographic tradition of representing Sun and Moon with faces developed in the late medieval period.
The Moon's regular phases make it a very convenient timepiece, and the periods of its waxing and waning form the basis of many of the oldest calendars. Tally sticks, notched bones dating as far back as 20–30,000 years ago, are believed by some to mark the phases of the Moon. The ~30-day month is an approximation of the lunar cycle. The English noun month and its cognates in other Germanic languages stem from Proto-Germanic *mǣnṓth-, which is connected to the above-mentioned Proto-Germanic *mǣnōn, indicating the usage of a lunar calendar among the Germanic peoples (Germanic calendar) prior to the adoption of a solar calendar. The PIE root of moon, *méh₁nōt, derives from the PIE verbal root *meh₁-, "to measure", "indicat[ing] a functional conception of the Moon, i.e. marker of the month" (cf. the English words measure and menstrual), and echoing the Moon's importance to many ancient cultures in measuring time (see Latin mensis and Ancient Greek μείς (meis) or μήν (mēn), meaning "month"). Most historical calendars are lunisolar. The 7th-century Islamic calendar is an exceptional example of a purely lunar calendar. Months are traditionally determined by the visual sighting of the hilal, or earliest crescent moon, over the horizon.
The Moon has long been associated with insanity and irrationality; the words lunacy and lunatic (popular shortening loony) are derived from the Latin name for the Moon, Luna. The philosopher Aristotle and the Roman naturalist Pliny the Elder argued that the full moon induced insanity in susceptible individuals, believing that the brain, which is mostly water, must be affected by the Moon and its power over the tides, but the Moon's gravity is too slight to affect any single person. Even today, people who believe in a lunar effect claim that admissions to psychiatric hospitals, traffic accidents, homicides or suicides increase during a full moon, but dozens of studies invalidate these claims.
- Between 18.29° and 28.58° to Earth's equator.
- There are a number of near-Earth asteroids, including 3753 Cruithne, that are co-orbital with Earth: their orbits bring them close to Earth for periods of time but then alter in the long term (Morais et al., 2002). These are quasi-satellites – they are not moons as they do not orbit Earth. For more information, see Other moons of Earth.
- The maximum value is given based on scaling of the brightness from the value of −12.74 given for an equator to Moon-centre distance of 378 000 km in the NASA factsheet reference to the minimum Earth–Moon distance given there, after the latter is corrected for Earth's equatorial radius of 6 378 km, giving 350 600 km. The minimum value (for a distant new moon) is based on a similar scaling using the maximum Earth–Moon distance of 407 000 km (given in the factsheet) and by calculating the brightness of the earthshine onto such a new moon. The brightness of the earthshine is [ Earth albedo × (Earth radius / Radius of Moon's orbit)² ] relative to the direct solar illumination that occurs for a full moon. (Earth albedo = 0.367; Earth radius = (polar radius × equatorial radius)½ = 6 367 km.)
- The range of angular size values given are based on simple scaling of the following values given in the fact sheet reference: at an Earth-equator to Moon-centre distance of 378 000 km, the angular size is 1896 arcseconds. The same fact sheet gives extreme Earth–Moon distances of 407 000 km and 357 000 km. For the maximum angular size, the minimum distance has to be corrected for Earth's equatorial radius of 6 378 km, giving 350 600 km.
- Lucey et al. (2006) give 10⁷ particles cm⁻³ by day and 10⁵ particles cm⁻³ by night. Along with equatorial surface temperatures of 390 K by day and 100 K by night, the ideal gas law yields the pressures given in the infobox (rounded to the nearest order of magnitude): 10⁻⁷ Pa by day and 10⁻¹⁰ Pa by night (a short worked check appears after these notes).
- This age is calculated from isotope dating of lunar zircons.
- More accurately, the Moon's mean sidereal period (fixed star to fixed star) is 27.321661 days (27 d 07 h 43 min 11.5 s), and its mean tropical orbital period (from equinox to equinox) is 27.321582 days (27 d 07 h 43 min 04.7 s) (Explanatory Supplement to the Astronomical Ephemeris, 1961, at p.107).
- More accurately, the Moon's mean synodic period (between mean solar conjunctions) is 29.530589 days (29 d 12 h 44 min 02.9 s) (Explanatory Supplement to the Astronomical Ephemeris, 1961, at p.107).
- There is no strong correlation between the sizes of planets and the sizes of their satellites. Larger planets tend to have more satellites, both large and small, than smaller planets.
- With 27% the diameter and 60% the density of Earth, the Moon has 1.23% of the mass of Earth. The moon Charon is larger relative to its primary Pluto, but Pluto is now considered to be a dwarf planet.
- The Sun's apparent magnitude is −26.7, while the full moon's apparent magnitude is −12.7.
- See graph in Sun#Life phases. At present, the diameter of the Sun is increasing at a rate of about five percent per billion years. This is very similar to the rate at which the apparent angular diameter of the Moon is decreasing as it recedes from Earth.
- On average, the Moon covers an area of 0.21078 square degrees on the night sky.
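The arithmetic behind several of these notes can be checked directly. The following short Python sketch is not part of the source; it simply reproduces the ideal-gas-law pressures, the Moon-to-Earth mass ratio, and the angular-size scaling quoted above, using the standard Boltzmann constant and the input values cited in the notes.

```python
K_B = 1.380649e-23  # Boltzmann constant, J/K

def pressure_pa(n_per_cm3, temp_k):
    """Ideal gas law p = n * k_B * T, with n converted from cm^-3 to m^-3."""
    return n_per_cm3 * 1e6 * K_B * temp_k

# Note values (Lucey et al. 2006): 1e7 cm^-3 at 390 K by day, 1e5 cm^-3 at 100 K by night.
print(f"daytime pressure:   {pressure_pa(1e7, 390):.1e} Pa")   # ~5e-08 Pa -> order 10^-7
print(f"nighttime pressure: {pressure_pa(1e5, 100):.1e} Pa")   # ~1e-10 Pa

# Moon/Earth mass ratio from the diameter and density ratios quoted in the notes:
# mass ratio = (diameter ratio)^3 * (density ratio)
diameter_ratio, density_ratio = 0.273, 0.606
print(f"mass ratio: {diameter_ratio**3 * density_ratio:.4f}")  # ~0.0123, i.e. about 1.23%

# Angular size scales inversely with distance (1896 arcsec at 378,000 km per the fact sheet).
for distance_km in (350_600, 407_000):
    print(f"angular size at {distance_km} km: {1896 * 378_000 / distance_km:.0f} arcsec")
```

Rounded to the nearest order of magnitude, the day and night pressures come out at 10⁻⁷ Pa and 10⁻¹⁰ Pa, the mass ratio at about 1.23%, and the angular size between roughly 1760 and 2040 arcseconds, matching the figures quoted in the notes.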
- Wieczorek, Mark A.; et al. (2006). "The constitution and structure of the lunar interior". Reviews in Mineralogy and Geochemistry. 60 (1): 221–364. Bibcode:2006RvMG...60..221W. doi:10.2138/rmg.2006.60.3.
- Lang, Kenneth R. (2011), The Cambridge Guide to the Solar System Archived 1 January 2016 at the Wayback Machine, 2nd ed., Cambridge University Press.
- Morais, M.H.M.; Morbidelli, A. (2002). "The Population of Near-Earth Asteroids in Coorbital Motion with the Earth". Icarus. 160 (1): 1–9. Bibcode:2002Icar..160....1M. doi:10.1006/icar.2002.6937.
- Williams, Dr. David R. (2 February 2006). "Moon Fact Sheet". NASA/National Space Science Data Center. Archived from the original on 23 March 2010. Retrieved 31 December 2008.
- Smith, David E.; Zuber, Maria T.; Neumann, Gregory A.; Lemoine, Frank G. (1 January 1997). "Topography of the Moon from the Clementine lidar". Journal of Geophysical Research. 102 (E1): 1601. Bibcode:1997JGR...102.1591S. doi:10.1029/96JE02940. hdl:2060/19980018849.
- Terry 2013, p. 226.
- Williams, James G.; Newhall, XX; Dickey, Jean O. (1996). "Lunar moments, tides, orientation, and coordinate frames". Planetary and Space Science. 44 (10): 1077–1080. Bibcode:1996P&SS...44.1077W. doi:10.1016/0032-0633(95)00154-9.
- Makemson, Maud W. (1971). "Determination of Selenographic Positions". The Moon. 2 (3): 293–308. Bibcode:1971Moon....2..293M. doi:10.1007/BF00561882.
- Archinal, Brent A.; A'Hearn, Michael F.; Bowell, Edward G.; Conrad, Albert R.; Consolmagno, Guy J.; et al. (2010). "Report of the IAU Working Group on Cartographic Coordinates and Rotational Elements: 2009" (PDF). Celestial Mechanics and Dynamical Astronomy. 109 (2): 101–135. Bibcode:2011CeMDA.109..101A. doi:10.1007/s10569-010-9320-4. Archived from the original (PDF) on 4 March 2016. Retrieved 24 September 2018.
- Matthews, Grant (2008). "Celestial body irradiance determination from an underfilled satellite radiometer: application to albedo and thermal emission measurements of the Moon using CERES". Applied Optics. 47 (27): 4981–4993. Bibcode:2008ApOpt..47.4981M. doi:10.1364/AO.47.004981. PMID 18806861.
- A.R. Vasavada; D.A. Paige & S.E. Wood (1999). "Near-Surface Temperatures on Mercury and the Moon and the Stability of Polar Ice Deposits". Icarus. 141 (2): 179–193. Bibcode:1999Icar..141..179V. doi:10.1006/icar.1999.6175.
- Lucey, Paul; Korotev, Randy L.; et al. (2006). "Understanding the lunar surface and space-Moon interactions". Reviews in Mineralogy and Geochemistry. 60 (1): 83–219. Bibcode:2006RvMG...60...83L. doi:10.2138/rmg.2006.60.2.
- "The Moon is Older Than Scientists Thought". Universe Today. https://www.universetoday.com/143025/the-moon-is-older-than-scientists-thought/
- "How far away is the moon?". Space Place. NASA. Archived from the original on 6 October 2016.
- Scott, Elaine (2016). Our Moon: New Discoveries About Earth's Closest Companion. Houghton Mifflin Harcourt. p. 7. ISBN 978-0-544-75058-6.
- Collins English Dictionary
- Oxford Living Dictionaries
- Meaning of “moon” in the English Dictionary Cambridge Learner's Dictionary
- "Naming Astronomical Objects: Spelling of Names". International Astronomical Union. Archived from the original on 16 December 2008. Retrieved 29 March 2010.
- "Gazetteer of Planetary Nomenclature: Planetary Nomenclature FAQ". USGS Astrogeology Research Program. Archived from the original on 27 May 2010. Retrieved 29 March 2010.
- The American Heritage Dictionary Indo-European Roots Appendix
- Barnhart, Robert K. (1995). The Barnhart Concise Dictionary of Etymology. Harper Collins. p. 487. ISBN 978-0-06-270084-1.
- Oxford English Dictionary, 2nd ed. "luna", Oxford University Press (Oxford), 2009.
- American Heritage Dictionary
- Collins English Dictionary
- Oxford Living Dictionaries
- "Oxford English Dictionary: lunar, a. and n." Oxford English Dictionary: Second Edition 1989. Oxford University Press. Retrieved 23 March 2010.
- σελήνη. Liddell, Henry George; Scott, Robert; A Greek–English Lexicon at the Perseus Project.
- Imke Pannen (2010). When the Bad Bleeds: Mantic Elements in English Renaissance Revenge Tragedy. V&R unipress GmbH. pp. 96–. ISBN 978-3-89971-640-5. Archived from the original on 4 September 2016.
- Barboni, M.; Boehnke, P.; Keller, C.B.; Kohl, I.E.; Schoene, B.; Young, E.D.; McKeegan, K.D. (2017). "Early formation of the Moon 4.51 billion years ago". Science Advances. 3 (1): e1602365. Bibcode:2017SciA....3E2365B. doi:10.1126/sciadv.1602365. PMC 5226643. PMID 28097222.
- Binder, A.B. (1974). "On the origin of the Moon by rotational fission". The Moon. 11 (2): 53–76. Bibcode:1974Moon...11...53B. doi:10.1007/BF01877794.
- Stroud, Rick (2009). The Book of the Moon. Walker and Company. pp. 24–27. ISBN 978-0-8027-1734-4.
- Mitler, H.E. (1975). "Formation of an iron-poor moon by partial capture, or: Yet another exotic theory of lunar origin". Icarus. 24 (2): 256–268. Bibcode:1975Icar...24..256M. doi:10.1016/0019-1035(75)90102-5.
- Stevenson, D.J. (1987). "Origin of the moon–The collision hypothesis". Annual Review of Earth and Planetary Sciences. 15 (1): 271–315. Bibcode:1987AREPS..15..271S. doi:10.1146/annurev.ea.15.050187.001415.
- Taylor, G. Jeffrey (31 December 1998). "Origin of the Earth and Moon". Planetary Science Research Discoveries. Hawai'i Institute of Geophysics and Planetology. Archived from the original on 10 June 2010. Retrieved 7 April 2010.
- "Asteroids Bear Scars of Moon's Violent Formation". 16 April 2015. Archived from the original on 8 October 2016.
- Dana Mackenzie (21 July 2003). The Big Splat, or How Our Moon Came to Be. John Wiley & Sons. pp. 166–168. ISBN 978-0-471-48073-0. Archived from the original on 1 January 2016.
- Canup, R.; Asphaug, E. (2001). "Origin of the Moon in a giant impact near the end of Earth's formation". Nature. 412 (6848): 708–712. Bibcode:2001Natur.412..708C. doi:10.1038/35089010. PMID 11507633.
- "Earth-Asteroid Collision Formed Moon Later Than Thought". National Geographic. 28 October 2010. Archived from the original on 18 April 2009. Retrieved 7 May 2012.
- Kleine, Thorsten (2008). "2008 Pellas-Ryder Award for Mathieu Touboul" (PDF). Meteoritics and Planetary Science. 43 (S7): A11–A12. Bibcode:2008M&PS...43...11K. doi:10.1111/j.1945-5100.2008.tb00709.x.
- Touboul, M.; Kleine, T.; Bourdon, B.; Palme, H.; Wieler, R. (2007). "Late formation and prolonged differentiation of the Moon inferred from W isotopes in lunar metals". Nature. 450 (7173): 1206–1209. Bibcode:2007Natur.450.1206T. doi:10.1038/nature06428. PMID 18097403.
- "Flying Oceans of Magma Help Demystify the Moon's Creation". National Geographic. 8 April 2015. Archived from the original on 9 April 2015.
- Pahlevan, Kaveh; Stevenson, David J. (2007). "Equilibration in the aftermath of the lunar-forming giant impact". Earth and Planetary Science Letters. 262 (3–4): 438–449. arXiv:1012.5323. Bibcode:2007E&PSL.262..438P. doi:10.1016/j.epsl.2007.07.055.
- Nield, Ted (2009). "Moonwalk (summary of meeting at Meteoritical Society's 72nd Annual Meeting, Nancy, France)". Geoscientist. Vol. 19. p. 8. Archived from the original on 27 September 2012.
- Warren, P.H. (1985). "The magma ocean concept and lunar evolution". Annual Review of Earth and Planetary Sciences. 13 (1): 201–240. Bibcode:1985AREPS..13..201W. doi:10.1146/annurev.ea.13.050185.001221.
- Tonks, W. Brian; Melosh, H. Jay (1993). "Magma ocean formation due to giant impacts". Journal of Geophysical Research. 98 (E3): 5319–5333. Bibcode:1993JGR....98.5319T. doi:10.1029/92JE02726.
- Daniel Clery (11 October 2013). "Impact Theory Gets Whacked". Science. 342 (6155): 183–185. Bibcode:2013Sci...342..183C. doi:10.1126/science.342.6155.183. PMID 24115419.
- Wiechert, U.; et al. (October 2001). "Oxygen Isotopes and the Moon-Forming Giant Impact". Science. 294 (12): 345–348. Bibcode:2001Sci...294..345W. doi:10.1126/science.1063037. PMID 11598294. Archived from the original on 20 April 2009. Retrieved 5 July 2009.
- Pahlevan, Kaveh; Stevenson, David (October 2007). "Equilibration in the Aftermath of the Lunar-forming Giant Impact". Earth and Planetary Science Letters. 262 (3–4): 438–449. arXiv:1012.5323. Bibcode:2007E&PSL.262..438P. doi:10.1016/j.epsl.2007.07.055.
- "Titanium Paternity Test Says Earth is the Moon's Only Parent (University of Chicago)". Astrobio.net. 5 April 2012. Retrieved 3 October 2013.
- Garrick-Bethell et al. (2014) "The tidal-rotational shape of the Moon and evidence for polar wander", Nature 512, 181–184.
- Taylor, Stuart R. (1975). Lunar Science: a Post-Apollo View. Oxford: Pergamon Press. p. 64. ISBN 978-0-08-018274-2.
- Brown, D.; Anderson, J. (6 January 2011). "NASA Research Team Reveals Moon Has Earth-Like Core". NASA. NASA. Archived from the original on 11 January 2012.
- Weber, R.C.; Lin, P.-Y.; Garnero, E.J.; Williams, Q.; Lognonne, P. (21 January 2011). "Seismic Detection of the Lunar Core" (PDF). Science. 331 (6015): 309–312. Bibcode:2011Sci...331..309W. doi:10.1126/science.1199375. PMID 21212323. Archived from the original (PDF) on 15 October 2015. Retrieved 10 April 2017.
- Nemchin, A.; Timms, N.; Pidgeon, R.; Geisler, T.; Reddy, S.; Meyer, C. (2009). "Timing of crystallization of the lunar magma ocean constrained by the oldest zircon". Nature Geoscience. 2 (2): 133–136. Bibcode:2009NatGe...2..133N. doi:10.1038/ngeo417. hdl:20.500.11937/44375.
- Shearer, Charles K.; et al. (2006). "Thermal and magmatic evolution of the Moon". Reviews in Mineralogy and Geochemistry. 60 (1): 365–518. Bibcode:2006RvMG...60..365S. doi:10.2138/rmg.2006.60.4.
- Schubert, J. (2004). "Interior composition, structure, and dynamics of the Galilean satellites.". In F. Bagenal; et al. (eds.). Jupiter: The Planet, Satellites, and Magnetosphere. Cambridge University Press. pp. 281–306. ISBN 978-0-521-81808-7.
- Williams, J.G.; Turyshev, S.G.; Boggs, D.H.; Ratcliff, J.T. (2006). "Lunar laser ranging science: Gravitational physics and lunar interior and geodesy". Advances in Space Research. 37 (1): 67–71. arXiv:gr-qc/0412049. Bibcode:2006AdSpR..37...67W. doi:10.1016/j.asr.2005.05.013.
- Spudis, Paul D.; Cook, A.; Robinson, M.; Bussey, B.; Fessler, B. (January 1998). "Topography of the South Polar Region from Clementine Stereo Imaging". Workshop on New Views of the Moon: Integrated Remotely Sensed, Geophysical, and Sample Datasets: 69. Bibcode:1998nvmi.conf...69S.
- Spudis, Paul D.; Reisse, Robert A.; Gillis, Jeffrey J. (1994). "Ancient Multiring Basins on the Moon Revealed by Clementine Laser Altimetry". Science. 266 (5192): 1848–1851. Bibcode:1994Sci...266.1848S. doi:10.1126/science.266.5192.1848. PMID 17737079.
- Pieters, C.M.; Tompkins, S.; Head, J.W.; Hess, P.C. (1997). "Mineralogy of the Mafic Anomaly in the South Pole‐Aitken Basin: Implications for excavation of the lunar mantle". Geophysical Research Letters. 24 (15): 1903–1906. Bibcode:1997GeoRL..24.1903P. doi:10.1029/97GL01718. hdl:2060/19980018038.
- Taylor, G.J. (17 July 1998). "The Biggest Hole in the Solar System". Planetary Science Research Discoveries: 20. Bibcode:1998psrd.reptE..20T. Archived from the original on 20 August 2007. Retrieved 12 April 2007.
- Schultz, P.H. (March 1997). "Forming the south-pole Aitken basin – The extreme games". Conference Paper, 28th Annual Lunar and Planetary Science Conference. 28: 1259. Bibcode:1997LPI....28.1259S.
- "NASA's LRO Reveals 'Incredible Shrinking Moon'". NASA. 19 August 2010. Archived from the original on 21 August 2010.
- Watters, Thomas R.; Weber, Renee C.; Collins, Geoffrey C.; Howley, Ian J.; Schmerr, Nicholas C.; Johnson, Catherine L. (June 2019). "Shallow seismic activity and young thrust faults on the Moon". Nature Geoscience (published 13 May 2019). 12 (6): 411–417. doi:10.1038/s41561-019-0362-2. ISSN 1752-0894.
- Wlasuk, Peter (2000). Observing the Moon. Springer. p. 19. ISBN 978-1-85233-193-1.
- Norman, M. (21 April 2004). "The Oldest Moon Rocks". Planetary Science Research Discoveries. Hawai'i Institute of Geophysics and Planetology. Archived from the original on 18 April 2007. Retrieved 12 April 2007.
- Head, L.W.J.W. (2003). "Lunar Gruithuisen and Mairan domes: Rheology and mode of emplacement". Journal of Geophysical Research. 108 (E2): 5012. Bibcode:2003JGRE..108.5012W. doi:10.1029/2002JE001909. Archived from the original on 12 March 2007. Retrieved 12 April 2007.
- Spudis, P.D. (2004). "Moon". World Book Online Reference Center, NASA. Archived from the original on 3 July 2013. Retrieved 12 April 2007.
- Gillis, J.J.; Spudis, P.D. (1996). "The Composition and Geologic Setting of Lunar Far Side Maria". Lunar and Planetary Science. 27: 413. Bibcode:1996LPI....27..413G.
- Lawrence, D.J., et al. (11 August 1998). "Global Elemental Maps of the Moon: The Lunar Prospector Gamma-Ray Spectrometer". Science. 281 (5382): 1484–1489. Bibcode:1998Sci...281.1484L. doi:10.1126/science.281.5382.1484. PMID 9727970. Archived from the original on 16 May 2009. Retrieved 29 August 2009.
- Taylor, G.J. (31 August 2000). "A New Moon for the Twenty-First Century". Planetary Science Research Discoveries: 41. Bibcode:2000psrd.reptE..41T. Archived from the original on 1 March 2012. Retrieved 12 April 2007.
- Papike, J.; Ryder, G.; Shearer, C. (1998). "Lunar Samples". Reviews in Mineralogy and Geochemistry. 36: 5.1–5.234.
- Hiesinger, H.; Head, J.W.; Wolf, U.; Jaumann, R.; Neukum, G. (2003). "Ages and stratigraphy of mare basalts in Oceanus Procellarum, Mare Nubium, Mare Cognitum, and Mare Insularum". Journal of Geophysical Research. 108 (E7): 1029. Bibcode:2003JGRE..108.5065H. doi:10.1029/2002JE001985.
- Phil Berardelli (9 November 2006). "Long Live the Moon!". Science. Archived from the original on 18 October 2014.
- Jason Major (14 October 2014). "Volcanoes Erupted 'Recently' on the Moon". Discovery News. Archived from the original on 16 October 2014.
- "NASA Mission Finds Widespread Evidence of Young Lunar Volcanism". NASA. 12 October 2014. Archived from the original on 3 January 2015.
- Eric Hand (12 October 2014). "Recent volcanic eruptions on the moon". Science. Archived from the original on 14 October 2014.
- Braden, S.E.; Stopar, J.D.; Robinson, M.S.; Lawrence, S.J.; van der Bogert, C.H.; Hiesinger, H. (2014). "Evidence for basaltic volcanism on the Moon within the past 100 million years". Nature Geoscience. 7 (11): 787–791. Bibcode:2014NatGe...7..787B. doi:10.1038/ngeo2252.
- Srivastava, N.; Gupta, R.P. (2013). "Young viscous flows in the Lowell crater of Orientale basin, Moon: Impact melts or volcanic eruptions?". Planetary and Space Science. 87: 37–45. Bibcode:2013P&SS...87...37S. doi:10.1016/j.pss.2013.09.001.
- Gupta, R.P.; Srivastava, N.; Tiwari, R.K. (2014). "Evidences of relatively new volcanic flows on the Moon". Current Science. 107 (3): 454–460.
- Whitten, J.; et al. (2011). "Lunar mare deposits associated with the Orientale impact basin: New insights into mineralogy, history, mode of emplacement, and relation to Orientale Basin evolution from Moon Mineralogy Mapper (M3) data from Chandrayaan-1". Journal of Geophysical Research. 116: E00G09. Bibcode:2011JGRE..116.0G09W. doi:10.1029/2010JE003736.
- Cho, Y.; et al. (2012). "Young mare volcanism in the Orientale region contemporary with the Procellarum KREEP Terrane (PKT) volcanism peak period 2 b.y. ago". Geophysical Research Letters. 39 (11): L11203. Bibcode:2012GeoRL..3911203C. doi:10.1029/2012GL051838.
- Munsell, K. (4 December 2006). "Majestic Mountains". Solar System Exploration. NASA. Archived from the original on 17 September 2008. Retrieved 12 April 2007.
- Richard Lovett (2011). "Early Earth may have had two moons : Nature News". Nature. doi:10.1038/news.2011.456. Archived from the original on 3 November 2012. Retrieved 1 November 2012.
- "Was our two-faced moon in a small collision?". Theconversation.edu.au. Archived from the original on 30 January 2013. Retrieved 1 November 2012.
- Melosh, H. J. (1989). Impact cratering: A geologic process. Oxford University Press. ISBN 978-0-19-504284-9.
- "Moon Facts". SMART-1. European Space Agency. 2010. Retrieved 12 May 2010.
- Wilhelms, Don (1987). "Relative Ages" (PDF). Geologic History of the Moon. U.S. Geological Survey. Archived (PDF) from the original on 11 June 2010.
- Hartmann, William K.; Quantin, Cathy; Mangold, Nicolas (2007). "Possible long-term decline in impact rates: 2. Lunar impact-melt data regarding impact history". Icarus. 186 (1): 11–23. Bibcode:2007Icar..186...11H. doi:10.1016/j.icarus.2006.09.009.
- "The Smell of Moondust". NASA. 30 January 2006. Archived from the original on 8 March 2010. Retrieved 15 March 2010.
- Heiken, G. (1991). Vaniman, D.; French, B. (eds.). Lunar Sourcebook, a user's guide to the Moon. New York: Cambridge University Press. p. 736. ISBN 978-0-521-33444-0.
- Rasmussen, K.L.; Warren, P.H. (1985). "Megaregolith thickness, heat flow, and the bulk composition of the Moon". Nature. 313 (5998): 121–124. Bibcode:1985Natur.313..121R. doi:10.1038/313121a0.
- Boyle, Rebecca. "The moon has hundreds more craters than we thought". Archived from the original on 13 October 2016.
- Speyerer, Emerson J.; Povilaitis, Reinhold Z.; Robinson, Mark S.; Thomas, Peter C.; Wagner, Robert V. (13 October 2016). "Quantifying crater production and regolith overturn on the Moon with temporal imaging". Nature. 538 (7624): 215–218. Bibcode:2016Natur.538..215S. doi:10.1038/nature19829. PMID 27734864.
- Margot, J.L.; Campbell, D.B.; Jurgens, R.F.; Slade, M.A. (4 June 1999). "Topography of the Lunar Poles from Radar Interferometry: A Survey of Cold Trap Locations" (PDF). Science. 284 (5420): 1658–1660. Bibcode:1999Sci...284.1658M. CiteSeerX 10.1.1.485.312. doi:10.1126/science.284.5420.1658. PMID 10356393.
- Ward, William R. (1 August 1975). "Past Orientation of the Lunar Spin Axis". Science. 189 (4200): 377–379. Bibcode:1975Sci...189..377W. doi:10.1126/science.189.4200.377. PMID 17840827.
- Martel, L.M.V. (4 June 2003). "The Moon's Dark, Icy Poles". Planetary Science Research Discoveries: 73. Bibcode:2003psrd.reptE..73M. Archived from the original on 1 March 2012. Retrieved 12 April 2007.
- Seedhouse, Erik (2009). Lunar Outpost: The Challenges of Establishing a Human Settlement on the Moon. Springer-Praxis Books in Space Exploration. Germany: Springer Praxis. p. 136. ISBN 978-0-387-09746-6.
- Coulter, Dauna (18 March 2010). "The Multiplying Mystery of Moonwater". NASA. Archived from the original on 13 December 2012. Retrieved 28 March 2010.
- Spudis, P. (6 November 2006). "Ice on the Moon". The Space Review. Archived from the original on 22 February 2007. Retrieved 12 April 2007.
- Feldman, W.C.; S. Maurice; A.B. Binder; B.L. Barraclough; R.C. Elphic; D.J. Lawrence (1998). "Fluxes of Fast and Epithermal Neutrons from Lunar Prospector: Evidence for Water Ice at the Lunar Poles". Science. 281 (5382): 1496–1500. Bibcode:1998Sci...281.1496F. doi:10.1126/science.281.5382.1496. PMID 9727973.
- Saal, Alberto E.; Hauri, Erik H.; Cascio, Mauro L.; van Orman, James A.; Rutherford, Malcolm C.; Cooper, Reid F. (2008). "Volatile content of lunar volcanic glasses and the presence of water in the Moon's interior". Nature. 454 (7201): 192–195. Bibcode:2008Natur.454..192S. doi:10.1038/nature07047. PMID 18615079.
- Pieters, C.M.; Goswami, J.N.; Clark, R.N.; Annadurai, M.; Boardman, J.; Buratti, B.; Combe, J.-P.; Dyar, M.D.; Green, R.; Head, J.W.; Hibbitts, C.; Hicks, M.; Isaacson, P.; Klima, R.; Kramer, G.; Kumar, S.; Livo, E.; Lundeen, S.; Malaret, E.; McCord, T.; Mustard, J.; Nettles, J.; Petro, N.; Runyon, C.; Staid, M.; Sunshine, J.; Taylor, L.A.; Tompkins, S.; Varanasi, P. (2009). "Character and Spatial Distribution of OH/H2O on the Surface of the Moon Seen by M3 on Chandrayaan-1". Science. 326 (5952): 568–572. Bibcode:2009Sci...326..568P. doi:10.1126/science.1178658. PMID 19779151.
- Li, Shuai; Lucey, Paul G.; Milliken, Ralph E.; Hayne, Paul O.; Fisher, Elizabeth; Williams, Jean-Pierre; Hurley, Dana M.; Elphic, Richard C. (August 2018). "Direct evidence of surface exposed water ice in the lunar polar regions". Proceedings of the National Academy of Sciences. 115 (36): 8907–8912. doi:10.1073/pnas.1802345115. PMC 6130389. PMID 30126996.
- Lakdawalla, Emily (13 November 2009). "LCROSS Lunar Impactor Mission: "Yes, We Found Water!"". The Planetary Society. Archived from the original on 22 January 2010. Retrieved 13 April 2010.
- Colaprete, A.; Ennico, K.; Wooden, D.; Shirley, M.; Heldmann, J.; Marshall, W.; Sollitt, L.; Asphaug, E.; Korycansky, D.; Schultz, P.; Hermalyn, B.; Galal, K.; Bart, G.D.; Goldstein, D.; Summy, D. (1–5 March 2010). "Water and More: An Overview of LCROSS Impact Results". 41st Lunar and Planetary Science Conference. 41 (1533): 2335. Bibcode:2010LPI....41.2335C.
- Colaprete, Anthony; Schultz, Peter; Heldmann, Jennifer; Wooden, Diane; Shirley, Mark; Ennico, Kimberly; Hermalyn, Brendan; Marshall, William; Ricco, Antonio; Elphic, Richard C.; Goldstein, David; Summy, Dustin; Bart, Gwendolyn D.; Asphaug, Erik; Korycansky, Don; Landis, David; Sollitt, Luke (22 October 2010). "Detection of Water in the LCROSS Ejecta Plume". Science. 330 (6003): 463–468. Bibcode:2010Sci...330..463C. doi:10.1126/science.1186986. PMID 20966242.
- Hauri, Erik; Thomas Weinreich; Albert E. Saal; Malcolm C. Rutherford; James A. Van Orman (26 May 2011). "High Pre-Eruptive Water Contents Preserved in Lunar Melt Inclusions". Science Express. 10 (1126): 213–215. Bibcode:2011Sci...333..213H. doi:10.1126/science.1204626. PMID 21617039.
- Rincon, Paul (21 August 2018). "Water ice 'detected on Moon's surface'". BBC News. Retrieved 21 August 2018.
- David, Leonard. "Beyond the Shadow of a Doubt, Water Ice Exists on the Moon". Scientific American. Retrieved 21 August 2018.
- "Water Ice Confirmed on the Surface of the Moon for the 1st Time!". Space.com. Retrieved 21 August 2018.
- Muller, P.; Sjogren, W. (1968). "Mascons: lunar mass concentrations". Science. 161 (3842): 680–684. Bibcode:1968Sci...161..680M. doi:10.1126/science.161.3842.680. PMID 17801458.
- Richard A. Kerr (12 April 2013). "The Mystery of Our Moon's Gravitational Bumps Solved?". Science. 340 (6129): 138–139. doi:10.1126/science.340.6129.138-a. PMID 23580504.
- Konopliv, A.; Asmar, S.; Carranza, E.; Sjogren, W.; Yuan, D. (2001). "Recent gravity models as a result of the Lunar Prospector mission" (PDF). Icarus. 150 (1): 1–18. Bibcode:2001Icar..150....1K. CiteSeerX 10.1.1.18.1930. doi:10.1006/icar.2000.6573. Archived from the original (PDF) on 13 November 2004.
- Garrick-Bethell, Ian; Weiss, Benjamin P.; Shuster, David L.; Buz, Jennifer (2009). "Early Lunar Magnetism". Science. 323 (5912): 356–359. Bibcode:2009Sci...323..356G. doi:10.1126/science.1166804. PMID 19150839.
- "Magnetometer / Electron Reflectometer Results". Lunar Prospector (NASA). 2001. Archived from the original on 27 May 2010. Retrieved 17 March 2010.
- Hood, L.L.; Huang, Z. (1991). "Formation of magnetic anomalies antipodal to lunar impact basins: Two-dimensional model calculations". Journal of Geophysical Research. 96 (B6): 9837–9846. Bibcode:1991JGR....96.9837H. doi:10.1029/91JB00308.
- "Moon Storms". NASA. 27 September 2013. Archived from the original on 12 September 2013. Retrieved 3 October 2013.
- Culler, Jessica (16 June 2015). "LADEE - Lunar Atmosphere Dust and Environment Explorer". Archived from the original on 8 April 2015.
- Globus, Ruth (1977). "Chapter 5, Appendix J: Impact Upon Lunar Atmosphere". In Richard D. Johnson & Charles Holbrow (ed.). Space Settlements: A Design Study. NASA. Archived from the original on 31 May 2010. Retrieved 17 March 2010.
- Crotts, Arlin P.S. (2008). "Lunar Outgassing, Transient Phenomena and The Return to The Moon, I: Existing Data" (PDF). The Astrophysical Journal. 687 (1): 692–705. arXiv:0706.3949. Bibcode:2008ApJ...687..692C. doi:10.1086/591634. Archived (PDF) from the original on 20 February 2009.
- Steigerwald, William (17 August 2015). "NASA's LADEE Spacecraft Finds Neon in Lunar Atmosphere". NASA. Retrieved 18 August 2015.
- Stern, S.A. (1999). "The Lunar atmosphere: History, status, current problems, and context". Reviews of Geophysics. 37 (4): 453–491. Bibcode:1999RvGeo..37..453S. CiteSeerX 10.1.1.21.9994. doi:10.1029/1999RG900005.
- Lawson, S.; Feldman, W.; Lawrence, D.; Moore, K.; Elphic, R.; Belian, R. (2005). "Recent outgassing from the lunar surface: the Lunar Prospector alpha particle spectrometer". Journal of Geophysical Research. 110 (E9): 1029. Bibcode:2005JGRE..11009009L. doi:10.1029/2005JE002433.
- R. Sridharan; S.M. Ahmed; Tirtha Pratim Dasa; P. Sreelathaa; P. Pradeepkumara; Neha Naika; Gogulapati Supriya (2010). "'Direct' evidence for water (H2O) in the sunlit lunar ambience from CHACE on MIP of Chandrayaan I". Planetary and Space Science. 58 (6): 947–950. Bibcode:2010P&SS...58..947S. doi:10.1016/j.pss.2010.02.013.
- Drake, Nadia (17 June 2015). "Lopsided Cloud of Dust Discovered Around the Moon". National Geographic News. Archived from the original on 19 June 2015. Retrieved 20 June 2015.
- Horányi, M.; Szalay, J.R.; Kempf, S.; Schmidt, J.; Grün, E.; Srama, R.; Sternovsky, Z. (18 June 2015). "A permanent, asymmetric dust cloud around the Moon". Nature. 522 (7556): 324–326. Bibcode:2015Natur.522..324H. doi:10.1038/nature14479. PMID 26085272.
- "NASA: The Moon Once Had an Atmosphere That Faded Away". Time.
- Hamilton, Calvin J.; Hamilton, Rosanna L., The Moon, Views of the Solar System Archived 4 February 2016 at the Wayback Machine, 1995–2011.
- Amos, Jonathan (16 December 2009). "'Coldest place' found on the Moon". BBC News. Retrieved 20 March 2010.
- "Diviner News". UCLA. 17 September 2009. Archived from the original on 7 March 2010. Retrieved 17 March 2010.
- Rocheleau, Jake (21 May 2012). "Temperature on the Moon – Surface Temperature of the Moon – PlanetFacts.org". Archived from the original on 27 May 2015.
- Haigh, I. D.; Eliot, M.; Pattiaratchi, C. (2011). "Global influences of the 18.61 year nodal cycle and 8.85 year cycle of lunar perigee on high tidal levels" (PDF). J. Geophys. Res. 116 (C6): C06025. Bibcode:2011JGRC..116.6025H. doi:10.1029/2010JC006645.
- V V Belet︠s︡kiĭ (2001). Essays on the Motion of Celestial Bodies. Birkhäuser. p. 183. ISBN 978-3-7643-5866-2.
- "Space Topics: Pluto and Charon". The Planetary Society. Archived from the original on 18 February 2012. Retrieved 6 April 2010.
- Phil Plait. "Dark Side of the Moon". Bad Astronomy: Misconceptions. Archived from the original on 12 April 2010. Retrieved 15 February 2010.
- Alexander, M.E. (1973). "The Weak Friction Approximation and Tidal Evolution in Close Binary Systems". Astrophysics and Space Science. 23 (2): 459–508. Bibcode:1973Ap&SS..23..459A. doi:10.1007/BF00645172.
- "Moon used to spin 'on different axis'". BBC News. BBC. 23 March 2016. Archived from the original on 23 March 2016. Retrieved 23 March 2016.
- Luciuk, Mike. "How Bright is the Moon?". Amateur Astronomers. Archived from the original on 12 March 2010. Retrieved 16 March 2010.
- Hershenson, Maurice (1989). The Moon illusion. Routledge. p. 5. ISBN 978-0-8058-0121-7.
- Spekkens, K. (18 October 2002). "Is the Moon seen as a crescent (and not a "boat") all over the world?". Curious About Astronomy. Archived from the original on 16 October 2015. Retrieved 28 September 2015.
- "Moonlight helps plankton escape predators during Arctic winters". New Scientist. 16 January 2016. Archived from the original on 30 January 2016.
- ""Super Moon" exceptional. Brightest moon in the sky of Normandy, Monday, November 14 - The Siver Times". 12 November 2016. Archived from the original on 14 November 2016.
- "Moongazers Delight – Biggest Supermoon in Decades Looms Large Sunday Night". 10 November 2016. Archived from the original on 14 November 2016.
- "Supermoon November 2016". Space.com. 13 November 2016. Archived from the original on 14 November 2016. Retrieved 14 November 2016.
- Tony Phillips (16 March 2011). "Super Full Moon". NASA. Archived from the original on 7 May 2012. Retrieved 19 March 2011.
- Richard K. De Atley (18 March 2011). "Full moon tonight is as close as it gets". The Press-Enterprise. Archived from the original on 22 March 2011. Retrieved 19 March 2011.
- "'Super moon' to reach closest point for almost 20 years". The Guardian. 19 March 2011. Archived from the original on 25 December 2013. Retrieved 19 March 2011.
- Georgia State University, Dept. of Physics (Astronomy). "Perceived Brightness". Brightness and Night/Day Sensitivity. Georgia State University. Archived from the original on 21 February 2014. Retrieved 25 January 2014.
- Lutron. "Measured light vs. perceived light" (PDF). From IES Lighting Handbook 2000, 27-4. Lutron. Archived (PDF) from the original on 5 February 2013. Retrieved 25 January 2014.
- Walker, John (May 1997). "Inconstant Moon". Earth and Moon Viewer. Fourth paragraph of "How Bright the Moonlight". Fourmilab. Archived from the original on 14 December 2013. Retrieved 23 January 2014. "14% [...] due to the logarithmic response of the human eye."
- Taylor, G.J. (8 November 2006). "Recent Gas Escape from the Moon". Planetary Science Research Discoveries: 110. Bibcode:2006psrd.reptE.110T. Archived from the original on 4 March 2007. Retrieved 4 April 2007.
- Schultz, P.H.; Staid, M.I.; Pieters, C.M. (2006). "Lunar activity from recent gas release". Nature. 444 (7116): 184–186. Bibcode:2006Natur.444..184S. doi:10.1038/nature05303. PMID 17093445.
- "22 Degree Halo: a ring of light 22 degrees from the sun or moon". Department of Atmospheric Sciences, University of Illinois at Urbana–Champaign. Retrieved 13 April 2010.
- Lambeck, K. (1977). "Tidal Dissipation in the Oceans: Astronomical, Geophysical and Oceanographic Consequences". Philosophical Transactions of the Royal Society A. 287 (1347): 545–594. Bibcode:1977RSPTA.287..545L. doi:10.1098/rsta.1977.0159.
- Le Provost, C.; Bennett, A.F.; Cartwright, D.E. (1995). "Ocean Tides for and from TOPEX/POSEIDON". Science. 267 (5198): 639–642. Bibcode:1995Sci...267..639L. doi:10.1126/science.267.5198.639. PMID 17745840.
- Touma, Jihad; Wisdom, Jack (1994). "Evolution of the Earth-Moon system". The Astronomical Journal. 108 (5): 1943–1961. Bibcode:1994AJ....108.1943T. doi:10.1086/117209.
- Chapront, J.; Chapront-Touzé, M.; Francou, G. (2002). "A new determination of lunar orbital parameters, precession constant and tidal acceleration from LLR measurements". Astronomy and Astrophysics. 387 (2): 700–709. Bibcode:2002A&A...387..700C. doi:10.1051/0004-6361:20020420.
- "Why the Moon is getting further away from Earth". BBC News. 1 February 2011. Archived from the original on 25 September 2015. Retrieved 18 September 2015.
- Ray, R. (15 May 2001). "Ocean Tides and the Earth's Rotation". IERS Special Bureau for Tides. Archived from the original on 27 March 2010. Retrieved 17 March 2010.
- Murray, C.D.; Dermott, Stanley F. (1999). Solar System Dynamics. Cambridge University Press. p. 184. ISBN 978-0-521-57295-8.
- Dickinson, Terence (1993). From the Big Bang to Planet X. Camden East, Ontario: Camden House. pp. 79–81. ISBN 978-0-921820-71-0.
- Latham, Gary; Ewing, Maurice; Dorman, James; Lammlein, David; Press, Frank; Toksőz, Naft; Sutton, George; Duennebier, Fred; Nakamura, Yosio (1972). "Moonquakes and lunar tectonism". Earth, Moon, and Planets. 4 (3–4): 373–382. Bibcode:1972Moon....4..373L. doi:10.1007/BF00562004.
- Phillips, Tony (12 March 2007). "Stereo Eclipse". Science@NASA. Archived from the original on 10 June 2008. Retrieved 17 March 2010.
- Espenak, F. (2000). "Solar Eclipses for Beginners". MrEclipse. Retrieved 17 March 2010.
- Walker, John (10 July 2004). "Moon near Perigee, Earth near Aphelion". Fourmilab. Archived from the original on 8 December 2013. Retrieved 25 December 2013.
- Thieman, J.; Keating, S. (2 May 2006). "Eclipse 99, Frequently Asked Questions". NASA. Archived from the original on 11 February 2007. Retrieved 12 April 2007.
- Espenak, F. "Saros Cycle". NASA. Archived from the original on 24 May 2012. Retrieved 17 March 2010.
- Guthrie, D.V. (1947). "The Square Degree as a Unit of Celestial Area". Popular Astronomy. Vol. 55. pp. 200–203. Bibcode:1947PA.....55..200G.
- "Total Lunar Occultations". Royal Astronomical Society of New Zealand. Archived from the original on 23 February 2010. Retrieved 17 March 2010.
- "Lunar maps". Retrieved 18 September 2019.
- "Carved and Drawn Prehistoric Maps of the Cosmos". Space Today. 2006. Archived from the original on 5 March 2012. Retrieved 12 April 2007.
- Aaboe, A.; Britton, J.P.; Henderson, J.A.; Neugebauer, Otto; Sachs, A.J. (1991). "Saros Cycle Dates and Related Babylonian Astronomical Texts". Transactions of the American Philosophical Society. 81 (6): 1–75. doi:10.2307/1006543. JSTOR 1006543. "One comprises what we have called "Saros Cycle Texts", which give the months of eclipse possibilities arranged in consistent cycles of 223 months (or 18 years)."
- Sarma, K.V. (2008). "Astronomy in India". In Helaine Selin (ed.). Encyclopaedia of the History of Science, Technology, and Medicine in Non-Western Cultures. Encyclopaedia of the History of Science (2 ed.). Springer. pp. 317–321. Bibcode:2008ehst.book.....S. ISBN 978-1-4020-4559-2.
- Needham 1986, p. 411.
- O'Connor, J.J.; Robertson, E.F. (February 1999). "Anaxagoras of Clazomenae". University of St Andrews. Archived from the original on 12 January 2012. Retrieved 12 April 2007.
- Needham 1986, p. 227.
- Needham 1986, p. 413–414.
- Robertson, E.F. (November 2000). "Aryabhata the Elder". Scotland: School of Mathematics and Statistics, University of St Andrews. Archived from the original on 11 July 2015. Retrieved 15 April 2010.
- A.I. Sabra (2008). "Ibn Al-Haytham, Abū ʿAlī Al-Ḥasan Ibn Al-Ḥasan". Dictionary of Scientific Biography. Detroit: Charles Scribner's Sons. pp. 189–210, at 195.
- Needham 1986, p. 415–416.
- Lewis, C.S. (1964). The Discarded Image. Cambridge: Cambridge University Press. p. 108. ISBN 978-0-521-47735-2.
- van der Waerden, Bartel Leendert (1987). "The Heliocentric System in Greek, Persian and Hindu Astronomy". Annals of the New York Academy of Sciences. 500 (1): 1–569. Bibcode:1987NYASA.500....1A. doi:10.1111/j.1749-6632.1987.tb37193.x. PMID 3296915.
- Evans, James (1998). The History and Practice of Ancient Astronomy. Oxford & New York: Oxford University Press. pp. 71, 386. ISBN 978-0-19-509539-5.
- "Discovering How Greeks Computed in 100 B.C." The New York Times. 31 July 2008. Archived from the original on 4 December 2013. Retrieved 9 March 2014.
- Van Helden, A. (1995). "The Moon". Galileo Project. Archived from the original on 23 June 2004. Retrieved 12 April 2007.
- Consolmagno, Guy J. (1996). "Astronomy, Science Fiction and Popular Culture: 1277 to 2001 (And beyond)". Leonardo. 29 (2): 127–132. doi:10.2307/1576348. JSTOR 1576348.
- Hall, R. Cargill (1977). "Appendix A: Lunar Theory Before 1964". NASA History Series. Lunar Impact: A History of Project Ranger. Washington, DC: Scientific and Technical Information Office, NASA. Archived from the original on 10 April 2010. Retrieved 13 April 2010.
- Zak, Anatoly (2009). "Russia's unmanned missions toward the Moon". Archived from the original on 14 April 2010. Retrieved 20 April 2010.
- "Rocks and Soils from the Moon". NASA. Archived from the original on 27 May 2010. Retrieved 6 April 2010.
- "Soldiers, Spies and the Moon: Secret U.S. and Soviet Plans from the 1950s and 1960s". The National Security Archive. National Security Archive. Archived from the original on 19 December 2016. Retrieved 1 May 2017.
- Brumfield, Ben (25 July 2014). "U.S. reveals secret plans for '60s moon base". CNN. Archived from the original on 27 July 2014. Retrieved 26 July 2014.
- Teitel, Amy (11 November 2013). "LUNEX: Another way to the Moon". Popular Science. Archived from the original on 16 October 2015.
- Logsdon, John (2010). John F. Kennedy and the Race to the Moon. Palgrave Macmillan. ISBN 978-0-230-11010-6.
- Coren, M. (26 July 2004). "'Giant leap' opens world of possibility". CNN. Archived from the original on 20 January 2012. Retrieved 16 March 2010.
- "Record of Lunar Events, 24 July 1969". Apollo 11 30th anniversary. NASA. Archived from the original on 8 April 2010. Retrieved 13 April 2010.
- "Manned Space Chronology: Apollo_11". Spaceline.org. Archived from the original on 14 February 2008. Retrieved 6 February 2008.
- "Apollo Anniversary: Moon Landing "Inspired World"". National Geographic. Archived from the original on 9 February 2008. Retrieved 6 February 2008.
- Orloff, Richard W. (September 2004) [First published 2000]. "Extravehicular Activity". Apollo by the Numbers: A Statistical Reference. NASA History Division, Office of Policy and Plans. The NASA History Series. Washington, DC: NASA. ISBN 978-0-16-050631-4. LCCN 00061677. NASA SP-2000-4029. Archived from the original on 6 June 2013. Retrieved 1 August 2013.
- Launius, Roger D. (July 1999). "The Legacy of Project Apollo". NASA History Office. Archived from the original on 8 April 2010. Retrieved 13 April 2010.
- SP-287 What Made Apollo a Success? A series of eight articles reprinted by permission from the March 1970 issue of Astronautics & Aeronautics, a publication of the American Institute of Aeronautics and Astronautics. Washington, DC: Scientific and Technical Information Office, National Aeronautics and Space Administration. 1971.
- "NASA news release 77-47 page 242" (PDF) (Press release). 1 September 1977. Archived (PDF) from the original on 4 June 2011. Retrieved 16 March 2010.
- Appleton, James; Radley, Charles; Deans, John; Harvey, Simon; Burt, Paul; Haxell, Michael; Adams, Roy; Spooner N.; Brieske, Wayne (1977). "NASA Turns A Deaf Ear To The Moon". OASI Newsletters Archive. Archived from the original on 10 December 2007. Retrieved 29 August 2007.
- Dickey, J.; et al. (1994). "Lunar laser ranging: a continuing legacy of the Apollo program". Science. 265 (5171): 482–490. Bibcode:1994Sci...265..482D. doi:10.1126/science.265.5171.482. PMID 17781305.
- "Hiten-Hagomoro". NASA. Archived from the original on 14 June 2011. Retrieved 29 March 2010.
- "Clementine information". NASA. 1994. Archived from the original on 25 September 2010. Retrieved 29 March 2010.
- "Lunar Prospector: Neutron Spectrometer". NASA. 2001. Archived from the original on 27 May 2010. Retrieved 29 March 2010.
- "SMART-1 factsheet". [¹[European Space Agency]]. 26 February 2007. Archived from the original on 23 March 2010. Retrieved 29 March 2010.
- "China's first lunar probe ends mission". Xinhua. 1 March 2009. Archived from the original on 4 March 2009. Retrieved 29 March 2010.
- Leonard David (17 March 2015). "China Outlines New Rockets, Space Station and Moon Plans". Space.com. Archived from the original on 1 July 2016. Retrieved 29 June 2016.
- "KAGUYA Mission Profile". JAXA. Archived from the original on 28 March 2010. Retrieved 13 April 2010.
- "KAGUYA (SELENE) World's First Image Taking of the Moon by HDTV". Japan Aerospace Exploration Agency (JAXA) and Japan Broadcasting Corporation (NHK). 7 November 2007. Archived from the original on 16 March 2010. Retrieved 13 April 2010.
- "Mission Sequence". Indian Space Research Organisation. 17 November 2008. Archived from the original on 6 July 2010. Retrieved 13 April 2010.
- "Indian Space Research Organisation: Future Program". Indian Space Research Organisation. Archived from the original on 25 November 2010. Retrieved 13 April 2010.
- "India and Russia Sign an Agreement on Chandrayaan-2". Indian Space Research Organisation. 14 November 2007. Archived from the original on 17 December 2007. Retrieved 13 April 2010.
- "Lunar CRater Observation and Sensing Satellite (LCROSS): Strategy & Astronomer Observation Campaign". NASA. October 2009. Archived from the original on 1 January 2012. Retrieved 13 April 2010.
- "Giant moon crater revealed in spectacular up-close photos". NBC News. Space.com. 6 January 2012.
- Chang, Alicia (26 December 2011). "Twin probes to circle moon to study gravity field". Phys.org. Associated Press. Retrieved 22 July 2018.
- Covault, C. (4 June 2006). "Russia Plans Ambitious Robotic Lunar Mission". Aviation Week. Archived from the original on 12 June 2006. Retrieved 12 April 2007.
- "Russia to send mission to Mars this year, Moon in three years". TV-Novosti. 25 February 2009. Archived from the original on 13 September 2010. Retrieved 13 April 2010.
- "About the Google Lunar X Prize". X-Prize Foundation. 2010. Archived from the original on 28 February 2010. Retrieved 24 March 2010.
- Wall, Mike (14 January 2011). "Mining the Moon's Water: Q&A with Shackleton Energy's Bill Stone". Space News.
- "President Bush Offers New Vision For NASA" (Press release). NASA. 14 December 2004. Archived from the original on 10 May 2007. Retrieved 12 April 2007.
- "Constellation". NASA. Archived from the original on 12 April 2010. Retrieved 13 April 2010.
- "NASA Unveils Global Exploration Strategy and Lunar Architecture" (Press release). NASA. 4 December 2006. Archived from the original on 23 August 2007. Retrieved 12 April 2007.
- NASAtelevision (15 April 2010). "President Obama Pledges Total Commitment to NASA". YouTube. Archived from the original on 28 April 2012. Retrieved 7 May 2012.
- "India's Space Agency Proposes Manned Spaceflight Program". Space.com. 10 November 2006. Archived from the original on 11 April 2012. Retrieved 23 October 2008.
- "SpaceX to help Vodafone and Nokia install first 4G signal on the Moon". The Week UK.
- "NASA plans to send first woman on Moon by 2024". The Asian Age. 15 May 2019. Retrieved 15 May 2019.
- Chang, Kenneth (24 January 2017). "For 5 Contest Finalists, a $20 Million Dash to the Moon". The New York Times. ISSN 0362-4331. Archived from the original on 15 July 2017. Retrieved 13 July 2017.
- Mike Wall (16 August 2017), "Deadline for Google Lunar X Prize Moon Race Extended Through March 2018", space.com, retrieved 25 September 2017
- McCarthy, Ciara (3 August 2016). "US startup Moon Express approved to make 2017 lunar mission". The Guardian. ISSN 0261-3077. Archived from the original on 30 July 2017. Retrieved 13 July 2017.
- "An Important Update From Google Lunar XPRIZE". Google Lunar XPRIZE. 23 January 2018. Retrieved 12 May 2018.
- "Moon Express Approved for Private Lunar Landing in 2017, a Space First". Space.com. Archived from the original on 12 July 2017. Retrieved 13 July 2017.
- Chang, Kenneth (29 November 2018). "NASA's Return to the Moon to Start With Private Companies' Spacecraft". The New York Times. The New York Times Company. Retrieved 29 November 2018.
- "NASA - Ultraviolet Waves". Science.hq.nasa.gov. 27 September 2013. Archived from the original on 17 October 2013. Retrieved 3 October 2013.
- Takahashi, Yuki (September 1999). "Mission Design for Setting up an Optical Telescope on the Moon". California Institute of Technology. Archived from the original on 6 November 2015. Retrieved 27 March 2011.
- Chandler, David (15 February 2008). "MIT to lead development of new telescopes on moon". MIT News. Archived from the original on 4 March 2009. Retrieved 27 March 2011.
- Naeye, Robert (6 April 2008). "NASA Scientists Pioneer Method for Making Giant Lunar Telescopes". Goddard Space Flight Center. Archived from the original on 22 December 2010. Retrieved 27 March 2011.
- Bell, Trudy (9 October 2008). "Liquid Mirror Telescopes on the Moon". Science News. NASA. Archived from the original on 23 March 2011. Retrieved 27 March 2011.
- "Far Ultraviolet Camera/Spectrograph". Lpi.usra.edu. Archived from the original on 3 December 2013. Retrieved 3 October 2013.
- "Can any State claim a part of outer space as its own?". United Nations Office for Outer Space Affairs. Archived from the original on 21 April 2010. Retrieved 28 March 2010.
- "How many States have signed and ratified the five international treaties governing outer space?". United Nations Office for Outer Space Affairs. 1 January 2006. Archived from the original on 21 April 2010. Retrieved 28 March 2010.
- "Do the five international treaties regulate military activities in outer space?". United Nations Office for Outer Space Affairs. Archived from the original on 21 April 2010. Retrieved 28 March 2010.
- "Agreement Governing the Activities of States on the Moon and Other Celestial Bodies". United Nations Office for Outer Space Affairs. Archived from the original on 9 August 2010. Retrieved 28 March 2010.
- "The treaties control space-related activities of States. What about non-governmental entities active in outer space, like companies and even individuals?". United Nations Office for Outer Space Affairs. Archived from the original on 21 April 2010. Retrieved 28 March 2010.
- "Statement by the Board of Directors of the IISL On Claims to Property Rights Regarding The Moon and Other Celestial Bodies (2004)" (PDF). International Institute of Space Law. 2004. Archived (PDF) from the original on 22 December 2009. Retrieved 28 March 2010.
- "Further Statement by the Board of Directors of the IISL On Claims to Lunar Property Rights (2009)" (PDF). International Institute of Space Law. 22 March 2009. Archived (PDF) from the original on 22 December 2009. Retrieved 28 March 2010.
- Dexter, Miriam Robbins (1984). "Proto-Indo-European Sun Maidens and Gods of the Moon". Mankind Quarterly. 25 (1 & 2): 137–144.
- Nemet-Nejat, Karen Rhea (1998), Daily Life in Ancient Mesopotamia, Daily Life, Greenwood, p. 203, ISBN 978-0-313-29497-6
- Black, Jeremy; Green, Anthony (1992). Gods, Demons and Symbols of Ancient Mesopotamia: An Illustrated Dictionary. The British Museum Press. p. 135. ISBN 978-0-7141-1705-8.
- Zschietzschmann, W. (2006). Hellas and Rome: The Classical World in Pictures. Whitefish, Montana: Kessinger Publishing. p. 23. ISBN 978-1-4286-5544-7.
- Cohen, Beth (2006). "Outline as a Special Technique in Black- and Red-figure Vase-painting". The Colors of Clay: Special Techniques in Athenian Vases. Los Angeles: Getty Publications. pp. 178–179. ISBN 978-0-89236-942-3.
- "Muhammad." Encyclopædia Britannica. 2007. Encyclopædia Britannica Online, p.13
- Marshack, Alexander (1991), The Roots of Civilization, Colonial Hill, Mount Kisco, NY.
- Brooks, A.S. and Smith, C.C. (1987): "Ishango revisited: new age determinations and cultural interpretations", The African Archaeological Review, 5 : 65–78.
- Duncan, David Ewing (1998). The Calendar. Fourth Estate Ltd. pp. 10–11. ISBN 978-1-85702-721-1.
- For etymology, see Barnhart, Robert K. (1995). The Barnhart Concise Dictionary of Etymology. Harper Collins. p. 487. ISBN 978-0-06-270084-1.. For the lunar calendar of the Germanic peoples, see Birley, A. R. (Trans.) (1999). Agricola and Germany. Oxford World's Classics. US: Oxford University Press. p. 108. ISBN 978-0-19-283300-6.
- Mallory, J.P.; Adams, D.Q. (2006). The Oxford Introduction to Proto-Indo-European and the Proto-Indo-European World. Oxford Linguistics. Oxford University Press. pp. 98, 128, 317. ISBN 978-0-19-928791-8.
- Harper, Douglas. "measure". Online Etymology Dictionary.
- Harper, Douglas. "menstrual". Online Etymology Dictionary.
- Smith, William George (1849). Dictionary of Greek and Roman Biography and Mythology: Oarses-Zygia. 3. J. Walton. p. 768. Retrieved 29 March 2010.
- Estienne, Henri (1846). Thesaurus graecae linguae. 5. Didot. p. 1001. Retrieved 29 March 2010.
- mensis. Charlton T. Lewis and Charles Short. A Latin Dictionary on Perseus Project.
- μείς in Liddell and Scott.
- "Islamic Calendars based on the Calculated First Visibility of the Lunar Crescent". University of Utrecht. Archived from the original on 11 January 2014. Retrieved 11 January 2014.
- Lilienfeld, Scott O.; Arkowitz, Hal (2009). "Lunacy and the Full Moon". Scientific American. Archived from the original on 16 October 2009. Retrieved 13 April 2010.
- Rotton, James; Kelly, I.W. (1985). "Much ado about the full moon: A meta-analysis of lunar-lunacy research". Psychological Bulletin. 97 (2): 286–306. doi:10.1037/0033-2909.97.2.286.
- Martens, R.; Kelly, I.W.; Saklofske, D.H. (1988). "Lunar Phase and Birthrate: A 50-year Critical Review". Psychological Reports. 63 (3): 923–934. doi:10.2466/pr0.19184.108.40.2063. PMID 3070616.
- Kelly, Ivan; Rotton, James; Culver, Roger (1986), "The Moon Was Full and Nothing Happened: A Review of Studies on the Moon and Human Behavior", Skeptical Inquirer, 10 (2): 129–143. Reprinted in The Hundredth Monkey - and other paradigms of the paranormal, edited by Kendrick Frazier, Prometheus Books. Revised and updated in The Outer Edge: Classic Investigations of the Paranormal, edited by Joe Nickell, Barry Karr, and Tom Genoni, 1996, CSICOP.
- Foster, Russell G.; Roenneberg, Till (2008). "Human Responses to the Geophysical Daily, Annual and Lunar Cycles". Current Biology. 18 (17): R784–R794. Bibcode:1996CBio....6.1213A. doi:10.1016/j.cub.2008.07.003. PMID 18786384.
- Needham, Joseph (1986). Science and Civilization in China, Volume III: Mathematics and the Sciences of the Heavens and Earth. Taipei: Caves Books. ISBN 978-0-521-05801-8.
- Terry, paul (2013), Top 10 Of Everything, Octopus Publishing Group Ltd 2013, ISBN 978-0-600-62887-3
- "Revisiting the Moon". The New York Times. Retrieved 8 September 2014.
- The Moon. Discovery 2008. BBC World Service.
- Bussey, B.; Spudis, P.D. (2004). The Clementine Atlas of the Moon. Cambridge University Press. ISBN 978-0-521-81528-4.
- Cain, Fraser. "Where does the Moon Come From?". Universe Today. Retrieved 1 April 2008. (podcast and transcript)
- Jolliff, B. (2006). Wieczorek, M.; Shearer, C.; Neal, C. (eds.). New views of the Moon. Reviews in Mineralogy and Geochemistry. 60. Chantilly, Virginia: Mineralogy Society of America. p. 721. Bibcode:2006RvMG...60D...5J. doi:10.2138/rmg.2006.60.0. ISBN 978-0-939950-72-0. Retrieved 12 April 2007.
- Jones, E.M. (2006). "Apollo Lunar Surface Journal". NASA. Retrieved 12 April 2007.
- "Exploring the Moon". Lunar and Planetary Institute. Retrieved 12 April 2007.
- Mackenzie, Dana (2003). The Big Splat, or How Our Moon Came to Be. Hoboken, NJ: John Wiley & Sons. ISBN 978-0-471-15057-2.
- Moore, P. (2001). On the Moon. Tucson, Arizona: Sterling Publishing Co. ISBN 978-0-304-35469-6.
- "Moon Articles". Planetary Science Research Discoveries. Hawai'i Institute of Geophysics and Planetology.
- Spudis, P.D. (1996). The Once and Future Moon. Smithsonian Institution Press. ISBN 978-1-56098-634-8.
- Taylor, S.R. (1992). Solar system evolution. Cambridge University Press. p. 307. ISBN 978-0-521-37212-1.
- Teague, K. (2006). "The Project Apollo Archive". Retrieved 12 April 2007.
- Wilhelms, D.E. (1987). "Geologic History of the Moon". U.S. Geological Survey Professional Paper. 1348. Retrieved 12 April 2007.
- Wilhelms, D.E. (1993). To a Rocky Moon: A Geologist's History of Lunar Exploration. Tucson: University of Arizona Press. ISBN 978-0-8165-1065-8. Retrieved 10 March 2009.
- NASA images and videos about the Moon
- Albums of images and high-resolution overflight videos by Seán Doran, based on LROC data, on Flickr and YouTube
- Video (04:56) – The Moon in 4K (NASA, April 2018) on YouTube
- Video (04:47) – The Moon in 3D (NASA, July 2018) on YouTube
- Moon Trek – An integrated map browser of datasets and maps for the Moon
- The Moon on Google Maps, a 3-D rendition of the Moon akin to Google Earth
- "Consolidated Lunar Atlas". Lunar and Planetary Institute. Retrieved 26 February 2012.
- Gazetteer of Planetary Nomenclature (USGS) List of feature names.
- "Clementine Lunar Image Browser". U.S. Navy. 15 October 2003. Retrieved 12 April 2007.
- 3D zoomable globes:
- Aeschliman, R. "Lunar Maps". Planetary Cartography and Graphics. Retrieved 12 April 2007. Maps and panoramas at Apollo landing sites
- Japan Aerospace Exploration Agency (JAXA) Kaguya (Selene) images
- Large image of the Moon's north pole area
- "NASA's SKYCAL – Sky Events Calendar". NASA. Archived from the original on 20 August 2007. Retrieved 27 August 2007.
- "Find moonrise, moonset and moonphase for a location". 2008. Retrieved 18 February 2008.
- "HMNAO's Moon Watch". 2005. Retrieved 24 May 2009. See when the next new crescent moon is visible for any location.
|
2 Review §9.6: Any QUESTIONS about §9.6 → Exponential Decay & Growth; any QUESTIONS about HomeWork → HW-48
3 The Distance Formula: The distance between the points (x1, y1) and (x2, y1) on a horizontal line is |x2 – x1|. Similarly, the distance between the points (x2, y1) and (x2, y2) on a vertical line is |y2 – y1|.
4 Pythagorean Distance: Now consider any two points (x1, y1) and (x2, y2). These points, along with (x2, y1), describe a right triangle. The lengths of the legs are |x2 – x1| and |y2 – y1|.
5 Pythagorean Distance: Find d, the length of the hypotenuse, by using the Pythagorean theorem: d^2 = |x2 – x1|^2 + |y2 – y1|^2. Since the square of a number is the same as the square of its opposite, we can replace the absolute-value signs with parentheses: d^2 = (x2 – x1)^2 + (y2 – y1)^2.
6 Distance Formula Formally: The distance d between any two points (x1, y1) and (x2, y2) is given by d = √[(x2 – x1)^2 + (y2 – y1)^2].
7 Example Find Distance: Find the distance between (3, 1) and (5, −6). Find an exact answer and an approximation to three decimal places. Solution: Substituting into the distance formula, d = √[(5 – 3)^2 + (−6 – 1)^2] = √(4 + 49) = √53. This is exact. Approximation: d ≈ 7.280.
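The formula translates directly into code. Below is a minimal Python sketch (the helper name `distance` is my own, not from the slides) that reproduces the worked example:

```python
import math

def distance(p, q):
    """Distance between p = (x1, y1) and q = (x2, y2) via the distance formula."""
    (x1, y1), (x2, y2) = p, q
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

d = distance((3, 1), (5, -6))
print(d)                               # 7.2801... (exact value: sqrt(53))
print(math.isclose(d, math.sqrt(53)))  # True
```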
8 Example Verify Rt TriAngle: Let A(4, 3), B(1, 4) and C(−2, −5) be three points in the plane. Connect these dots to form a triangle, then: a. sketch the triangle ABC; b. find the length of each side of the triangle; c. show that ABC is a right triangle.
9 Example Verify Rt TriAngle Soln a.: Sketch the triangle ABC (sketch omitted).
10 Example Verify Rt TriAngle Soln b.: Find the length of each side of the triangle → use the Distance Formula: AB = √[(1 – 4)^2 + (4 – 3)^2] = √10; BC = √[(−2 – 1)^2 + (−5 – 4)^2] = √90 = 3√10; AC = √[(−2 – 4)^2 + (−5 – 3)^2] = √100 = 10.
11 Example Verify Rt TriAngle Soln c.: Show that ABC is a right triangle. Check that a^2 + b^2 = c^2 holds in this triangle, where a, b, and c denote the lengths of its sides. The longest side, AC, has length 10 units, and AB^2 + BC^2 = 10 + 90 = 100 = AC^2. It follows from the converse of the Pythagorean Theorem that the triangle ABC IS a right triangle, with the right angle at B.
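A short numeric check of this converse argument, as a sketch using the standard-library math.dist (Python 3.8+); comparing squared lengths avoids rounding the square roots:

```python
import math

A, B, C = (4, 3), (1, 4), (-2, -5)
ab2 = math.dist(A, B) ** 2  # 10
bc2 = math.dist(B, C) ** 2  # 90
ac2 = math.dist(A, C) ** 2  # 100 -> AC is the longest side

# Converse of the Pythagorean Theorem: right triangle iff a^2 + b^2 = c^2.
print(math.isclose(ab2 + bc2, ac2))  # True: the right angle is at B
```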
12 Example BaseBall Distance: The baseball “diamond” is in fact a square with a distance of 90 feet between consecutive bases. Use an appropriate coordinate system to calculate the distance the ball will travel when the third baseman throws it from third base to first base.
13 Example BaseBall Distance Solution: Conveniently choose home plate as the origin and place the x-axis along the line from home plate to first base and the y-axis along the line from home plate to third base.
14 Example BaseBall Distance: Find from the diagram the coordinates of home plate (O), first base (A), second base (C) and third base (B). With this coordinate system, O = (0, 0), A = (90, 0), C = (90, 90) and B = (0, 90).
15 Example BaseBall Distance: Find the distance between points A and B: d = √[(0 – 90)^2 + (90 – 0)^2] = √16200 = 90√2 ≈ 127.3 ft.
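The same distance computation (here via math.dist) gives the throw length directly:

```python
import math
print(math.dist((90, 0), (0, 90)))  # 127.279... ft, i.e. 90*sqrt(2)
```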
16 The MidPoint Formula: Now that we have derived the distance formula from the Pythagorean Theorem, we use the distance formula to develop a formula for the coordinates of the midpoint of a segment connecting two points.
17 The MidPoint Formula: If the endpoints of a segment are (x1, y1) and (x2, y2), then the coordinates of the midpoint are ((x1 + x2)/2, (y1 + y2)/2). That is, to locate the midpoint, average the x-coordinates and average the y-coordinates.
18 Example MidPoint Formula: Find the midpoint of the line segment joining the points P(−3, 6) and Q(1, 4). Solution: (x1, y1) = (−3, 6) & (x2, y2) = (1, 4), so the midpoint is ((−3 + 1)/2, (6 + 4)/2) = (−1, 5).
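A matching Python sketch (again, the helper name `midpoint` is illustrative, not from the slides):

```python
def midpoint(p, q):
    """Midpoint of the segment from p = (x1, y1) to q = (x2, y2): average each coordinate."""
    (x1, y1), (x2, y2) = p, q
    return ((x1 + x2) / 2, (y1 + y2) / 2)

print(midpoint((-3, 6), (1, 4)))  # (-1.0, 5.0), matching the example
```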
19 CIRCLE Defined: A circle is a set of points in a Cartesian coordinate plane that are at a fixed distance r from a specified point (h, k). The fixed distance r is called the radius of the circle, and the specified point (h, k) is called the center of the circle.
20 CIRCLE Graphed: The graph of a circle with center (h, k) and radius r (graph omitted).
21 CIRCLE - Equation: The equation of a circle with center (h, k) and radius r is (x – h)^2 + (y – k)^2 = r^2. This equation is also called the standard form of an equation of a circle with radius r and center (h, k).
22 Example Find Circle Eqn: Find the center-radius form of the equation of the circle with center (−3, 4) and radius 7. Solution: (x – (−3))^2 + (y – 4)^2 = 7^2, i.e. (x + 3)^2 + (y – 4)^2 = 49.
23 Example Graph Circle: Graph the equation x^2 + y^2 = 1. Solution: Center: (0, 0), Radius: 1. Called the unit circle.
24 Example Graph Circle: Graph the equation (x + 2)^2 + (y – 3)^2 = 25. Solution: Center: (−2, 3), Radius: 5.
25 Equation ↔ Circle: Note that stating that the equation (x + 3)^2 + (y – 4)^2 = 25 represents the circle of radius 5 with center (–3, 4) means two things: If the values of x and y are a pair of numbers that satisfy the equation, then they are the coordinates of a point on the circle with radius 5 and center (–3, 4). If a point is on the circle, then its coordinates satisfy the equation.
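That two-way correspondence is easy to test numerically. A sketch (helper name mine) that checks whether a point satisfies the circle equation:

```python
import math

def on_circle(p, center, r, tol=1e-9):
    """True iff point p satisfies (x - h)^2 + (y - k)^2 = r^2 within tolerance."""
    (x, y), (h, k) = p, center
    return math.isclose((x - h) ** 2 + (y - k) ** 2, r ** 2, abs_tol=tol)

print(on_circle((1, 1), (-3, 4), 5))  # True: 4^2 + (-3)^2 = 25
print(on_circle((0, 0), (-3, 4), 5))  # True: 3^2 + (-4)^2 = 25
print(on_circle((1, 0), (-3, 4), 5))  # False: 4^2 + (-4)^2 = 32
```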
26 Circle Eqn → General Form: The general form of the equation of a circle is x^2 + y^2 + Dx + Ey + F = 0.
27 Example General Form: Find the center and radius of the circle with equation x^2 + y^2 − 6x + 8y + 10 = 0. Solution: COMPLETE the SQUARE for both x & y: (x − 3)^2 + (y + 4)^2 = 9 + 16 − 10 = 15. Center: (3, −4), Radius: √15.
28 Example General Form: Find the center & radius and then graph the circle x^2 + y^2 + 2x – 6y + 6 = 0. Solution: Complete the square for both x & y to convert to standard form: x^2 + 2x + y^2 – 6y = –6; x^2 + 2x + 1 + y^2 – 6y + 9 = –6 + 1 + 9; (x + 1)^2 + (y – 3)^2 = 4; that is, (x – (–1))^2 + (y – 3)^2 = 2^2.
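Completing the square can be done mechanically: for x^2 + y^2 + Dx + Ey + F = 0 the center is (−D/2, −E/2) and r^2 = (D/2)^2 + (E/2)^2 − F. A minimal Python sketch of that recipe (the function name is my own), checked against both examples above:

```python
import math

def circle_from_general(D, E, F):
    """Center and radius of x^2 + y^2 + D*x + E*y + F = 0, by completing the square."""
    h, k = -D / 2, -E / 2        # center (h, k)
    r2 = h ** 2 + k ** 2 - F     # (D/2)^2 + (E/2)^2 - F
    if r2 <= 0:
        raise ValueError("no circle: r^2 <= 0")
    return (h, k), math.sqrt(r2)

print(circle_from_general(-6, 8, 10))  # ((3.0, -4.0), 3.872...) i.e. radius sqrt(15)
print(circle_from_general(2, -6, 6))   # ((-1.0, 3.0), 2.0)
```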
29 Example General Form Solution: Graph. Center: (–1, 3), Radius: 2. Sketch the graph of (x – (–1))^2 + (y – 3)^2 = 2^2 (graph omitted).
30 WhiteBoard Work: Problems from §10.1 Exercise Set, Circle Eqns: 16, 26, 38, 48, 54, 56.
31 All Done for Today: Circle as Conic Section.
|
History of the United States Democratic Party
|Founded||January 8, 1828|
|Headquarters||430 South Capitol St. SE, Washington, D.C., 20003|
|Colors||Blue (after 2000)|
The Democratic Party is the oldest voter-based political party in the world and the oldest existing political party in the United States, tracing its heritage back to the anti-Federalists and the Jeffersonian Democratic-Republican Party of the 1790s. During the Second Party System (from 1832 to the mid-1850s) under Presidents Andrew Jackson, Martin Van Buren and James K. Polk, the Democrats usually bested the opposition Whig Party by narrow margins. Both parties worked hard to build grassroots organizations and maximize the turnout of voters, which often reached 80 or 90 percent. Both parties used patronage extensively to finance their operations, which included emerging big city political machines as well as national networks of newspapers. The Democratic Party championed the interests of slave-owners across the country, urban workers and white immigrants.
From 1860 to 1932, the era stretching from the Civil War to the Great Depression, the opposing Republican Party, organized in the mid-1850s from the ruins of the Whig Party and some other smaller splinter groups, was dominant in presidential politics. In those 72 years the Democrats elected only two Presidents, each to two terms: Grover Cleveland (in 1884 and 1892) and Woodrow Wilson (in 1912 and 1916). Over the same period, the Democrats proved more competitive with the Republicans in Congressional politics, enjoying House of Representatives majorities (as in the 65th Congress) in 15 of the 36 Congresses elected, although only in five of these did they form the majority in the Senate. The party was split between the Bourbon Democrats, representing Eastern business interests, and the agrarian elements comprising poor farmers in the South and West. The agrarian element, marching behind the slogan of "free silver" (i.e. in favor of inflation), captured the party in 1896 and nominated William Jennings Bryan in 1896, 1900 and 1908, though he lost every time. Both Bryan and Wilson were leaders of the Progressive Movement (1890s–1920s).
Starting with the election of 32nd President Franklin D. Roosevelt in 1932, during the Great Depression, the party dominated the Fifth Party System with the progressive liberal policies and programs of the New Deal coalition, assembled to combat the emergency bank closings and the financial depression that had continued since the Wall Street Crash of 1929 and, later, the crises leading up to World War II. The Democrats held the White House through Roosevelt's death in April 1945, near the end of the war, and through the post-war administration of his third Vice President, Harry S. Truman, a former Senator from Missouri (1945 to 1953, via the election of 1944 and the "stunner" of 1948). A Republican President was elected again only in the early 1950s, when two-time Democratic nominee Adlai Stevenson, Governor of Illinois (grandson of the 1890s Vice President of the same name), lost in 1952 and 1956 to the popular war hero and commanding World War II general Dwight D. Eisenhower.
With two brief interruptions since the Great Depression and World War II eras, the Democrats held unusually large majorities in the House of Representatives from 1930 until 1994, and controlled the Senate for most of that period, electing the Speaker of the House and the majority leaders and committee chairs of both chambers. Important Democratic progressive/liberal leaders included 33rd President Harry S. Truman of Missouri (1945–1953) and 36th President Lyndon B. Johnson of Texas (1963–1969), as well as the Kennedy brothers: 35th President John F. Kennedy of Massachusetts (1961–1963), Senator Robert F. Kennedy of New York and Senator Ted Kennedy of Massachusetts, who carried the flag for modern American liberalism. Since 1976, Democrats have won five of the last eleven presidential elections: 1976 (39th President Jimmy Carter of Georgia, 1977–1981), 1992 and 1996 (42nd President Bill Clinton of Arkansas, 1993–2001) and 2008 and 2012 (44th President Barack Obama of Illinois, 2009–2017). Democrats also won the popular vote in 2000 and 2016 but lost the Electoral College, with Al Gore and Hillary Clinton, respectively; the same happened in 1876 and 1888, when the Democratic candidates were Samuel J. Tilden and Grover Cleveland. Social scientists Theodore Caplow et al. argue that "the Democratic party, nationally, moved from left-center toward the center in the 1940s and 1950s, then moved further toward the right-center in the 1970s and 1980s".
- 1 Presidency of John Quincy Adams (1825–1829)
- 2 Presidency of Andrew Jackson (1829–1837)
- 3 Presidency of Martin Van Buren (1837–1841)
- 4 Presidency of William H. Harrison & John Tyler (1841–1845)
- 5 Presidency of James K. Polk (1845–1849)
- 6 Presidency of Zachary Taylor (1849–1850)
- 7 Presidency of Millard Fillmore (1850–1853)
- 8 Presidency of Franklin Pierce (1853–1857)
- 9 Presidency of James Buchanan (1857–1861)
- 10 Presidency of Abraham Lincoln (1861–1865)
- 11 Presidency of Andrew Johnson (1865–1869)
- 12 Presidency of Ulysses S. Grant (1869–1877)
- 13 Presidency of Rutherford B. Hayes (1877–1881)
- 14 Presidency of James A. Garfield (1881)
- 15 Presidency of Chester A. Arthur (1881–1885)
- 16 Presidency of Grover Cleveland (1885–1889)
- 17 Presidency of Benjamin Harrison (1889–1893)
- 18 Presidency of Grover Cleveland (1893–1897)
- 19 Presidency of William McKinley (1897–1901)
- 20 Presidency of Theodore Roosevelt (1901–1909)
- 21 Presidency of William Howard Taft (1909–1913)
- 22 Presidency of Woodrow Wilson (1913–1921)
- 23 Presidency of Warren G. Harding (1921–1923)
- 24 Presidency of Calvin Coolidge (1923–1929)
- 25 Presidency of Herbert Hoover (1929–1933)
- 26 Presidency of Franklin D. Roosevelt (1933–1945)
- 27 Presidency of Harry S. Truman (1945–1953)
- 28 Presidency of Dwight D. Eisenhower (1953–1961)
- 29 Presidency of John F. Kennedy (1961–1963)
- 30 Presidency of Lyndon B. Johnson (1963–1969)
- 31 Presidency of Richard Nixon (1969–1974)
- 32 Presidency of Gerald Ford (1974–1977)
- 33 Presidency of Jimmy Carter (1977–1981)
- 34 Presidency of Ronald Reagan (1981–1989)
- 35 Presidency of George H. W. Bush (1989–1993)
- 36 Presidency of Bill Clinton (1993–2001)
- 37 Presidency of George W. Bush (2001–2009)
- 38 Presidency of Barack Obama (2009–2017)
- 39 Presidency of Donald Trump (2017–present)
- 39.1 115th United States Congress
- 39.1.1 National Democratic Redistricting Committee
- 39.1.2 Protests against Donald Trump
- 39.1.3 Democratic Party PACs
- 40 See also
- 41 Notes
- 42 References
- 43 Further reading
- 44 External links
Presidency of John Quincy Adams (1825–1829)
The modern Democratic Party emerged in the late 1820s from former factions of the Democratic-Republican Party, which had largely collapsed by 1824. It was built by Martin Van Buren, who assembled a cadre of politicians in every state behind war hero Andrew Jackson of Tennessee.
Presidency of Andrew Jackson (1829–1837)
The spirit of Jacksonian democracy animated the party from the early 1830s to the 1850s, shaping the Second Party System, with the Whig Party the main opposition. After the disappearance of the Federalists after 1815 and the Era of Good Feelings (1816–1824), there was a hiatus of weakly organized personal factions until about 1828–1832, when the modern Democratic Party emerged along with its rival the Whigs. The new Democratic Party became a coalition of farmers, city-dwelling laborers and Irish Catholics.
Behind the party platforms, acceptance speeches of candidates, editorials, pamphlets and stump speeches, there was a widespread consensus of political values among Democrats. As Norton explains:
The Democrats represented a wide range of views but shared a fundamental commitment to the Jeffersonian concept of an agrarian society. They viewed the central government as the enemy of individual liberty. The 1824 "corrupt bargain" had strengthened their suspicion of Washington politics. [...] Jacksonians feared the concentration of economic and political power. They believed that government intervention in the economy benefited special-interest groups and created corporate monopolies that favored the rich. They sought to restore the independence of the individual – the artisan and the ordinary farmer – by ending federal support of banks and corporations and restricting the use of paper currency, which they distrusted. Their definition of the proper role of government tended to be negative, and Jackson's political power was largely expressed in negative acts. He exercised the veto more than all previous presidents combined. Jackson and his supporters also opposed reform as a movement. Reformers eager to turn their programs into legislation called for a more active government. But Democrats tended to oppose programs like educational reform and the establishment of a public education system. They believed, for instance, that public schools restricted individual liberty by interfering with parental responsibility and undermined freedom of religion by replacing church schools. Nor did Jackson share reformers' humanitarian concerns. He had no sympathy for American Indians, initiating the removal of the Cherokees along the Trail of Tears.
The party was weakest in New England, but strong everywhere else and won most national elections thanks to strength in New York, Pennsylvania, Virginia (by far the most populous states at the time) and the American frontier. Democrats opposed elites and aristocrats, the Bank of the United States and the whiggish modernizing programs that would build up industry at the expense of the yeoman or independent small farmer.
Historian Frank Towers has specified an important ideological divide:
Democrats stood for the 'sovereignty of the people' as expressed in popular demonstrations, constitutional conventions, and majority rule as a general principle of governing, whereas Whigs advocated the rule of law, written and unchanging constitutions, and protections for minority interests against majority tyranny.
From 1828 to 1848, banking and tariffs were the central domestic policy issues. Democrats strongly favored—and Whigs opposed—expansion to new farm lands, as typified by their expulsion of eastern American Indians and acquisition of vast amounts of new land in the West after 1846. The party favored the war with Mexico and opposed anti-immigrant nativism. Both Democrats and Whigs were divided on the issue of slavery. In the 1830s, the Locofocos in New York City were radically democratic, anti-monopoly and were proponents of hard money and free trade. Their chief spokesman was William Leggett. At this time, labor unions were few and some were loosely affiliated with the party.
Presidency of Martin Van Buren (1837–1841)
Martin Van Buren was key in the development of the Democratic Party, although his presidency featured many setbacks. Many of Jackson's policies had repercussions while Van Buren held office, such as the Trail of Tears: policies enacted under Jackson came into full effect under Van Buren, who oversaw the displacement of thousands of Native Americans. In addition, Van Buren was anti-slavery and represented a divide in both parties. Despite this, he did almost nothing to help the abolitionist movement, and his presidency saw a continuation of pro-slavery legislation. Jackson's decision to abolish the Second Bank of the United States contributed to the Panic of 1837 during Van Buren's presidency, bringing public disapproval and a loss of power for his party.
Presidency of William H. Harrison & John Tyler (1841–1845)
The Panic of 1837 led to a drop in popularity for Van Buren and the Democrats. The Whigs nominated William Henry Harrison as their candidate for the 1840 presidential race. Harrison won, becoming the first Whig president. A month later he died in office and was succeeded by his vice president, John Tyler. Tyler had only recently left the Democrats for the Whigs, and his beliefs did not align closely with the Whig party. During his presidency, he vetoed many of his own party's bills, leading his own party to disown him. This allowed the Democrats to retake power in 1845.
Presidency of James K. Polk (1845–1849)
Foreign policy was a major issue in the 1840s as war threatened with Mexico over Texas and with Britain over Oregon. Democrats strongly supported Manifest Destiny and most Whigs strongly opposed it. The 1844 election was a showdown, with the Democrat James K. Polk narrowly defeating Whig Henry Clay on the Texas issue.
John Mack Faragher's analysis of the political polarization between the parties is:
Most Democrats were wholehearted supporters of expansion, whereas many Whigs (especially in the North) were opposed. Whigs welcomed most of the changes wrought by industrialization but advocated strong government policies that would guide growth and development within the country's existing boundaries; they feared (correctly) that expansion raised a contentious issue the extension of slavery to the territories. On the other hand, many Democrats feared industrialization the Whigs welcomed. [...] For many Democrats, the answer to the nation's social ills was to continue to follow Thomas Jefferson's vision of establishing agriculture in the new territories in order to counterbalance industrialization.
Presidency of Zachary Taylor (1849–1850)
Presidency of Millard Fillmore (1850–1853)
The Democratic National Committee (DNC) was created in 1848 at the convention that nominated General Lewis Cass, who lost to General Zachary Taylor of the Whigs. A major cause of the defeat was that the new Free Soil Party, which opposed slavery expansion, split the Democratic Party, particularly in New York, where the electoral votes went to Taylor. Democrats in Congress passed the Compromise of 1850, designed to put the slavery issue to rest while resolving issues involving the territories gained in the war with Mexico. However, in state after state the Democrats gained small but permanent advantages over the Whig Party, which finally collapsed in 1852, fatally weakened by division on slavery and nativism. The fragmented opposition could not stop the election of Democrats Franklin Pierce in 1852 and James Buchanan in 1856.
Presidency of Franklin Pierce (1853–1857)
Presidency of James Buchanan (1857–1861)
During 1858–1860, Senator Stephen A. Douglas confronted President Buchanan in a furious battle for control of the party. Douglas finally won, but his nomination signaled defeat for the Southern wing of the party and it walked out of the 1860 convention and nominated its own presidential ticket.
Yonatan Eyal (2007) argues that the 1840s and 1850s were the heyday of a new faction of young Democrats called "Young America". Led by Stephen A. Douglas, James K. Polk, Franklin Pierce and New York financier August Belmont, this faction, Eyal explains, broke with the agrarian and strict constructionist orthodoxies of the past and embraced commerce, technology, regulation, reform and internationalism. The movement attracted a circle of outstanding writers, including William Cullen Bryant, George Bancroft, Herman Melville and Nathaniel Hawthorne. They sought independence from European standards of high culture and wanted to demonstrate the excellence and exceptionalism of America's own literary tradition.
In economic policy, Young America saw the necessity of a modern infrastructure with railroads, canals, telegraphs, turnpikes and harbors. They endorsed the "market revolution" and promoted capitalism. They called for Congressional land grants to the states, which allowed Democrats to claim that internal improvements were locally rather than federally sponsored. Young America claimed that modernization would perpetuate the agrarian vision of Jeffersonian democracy by allowing yeomen farmers to sell their products and therefore to prosper. They tied internal improvements to free trade, while accepting moderate tariffs as a necessary source of government revenue. They supported the Independent Treasury (the Jacksonian alternative to the Second Bank of the United States) not as a scheme to quash the special privilege of the Whiggish monied elite, but as a device to spread prosperity to all Americans.
Breakdown of the Second Party System (1854–1859)
As sectional confrontations escalated during the 1850s, the Democratic Party's split between North and South grew deeper. The conflict was papered over at the 1852 and 1856 conventions by selecting men who had little involvement in sectionalism, but these choices made matters worse. Historian Roy F. Nichols explains why Franklin Pierce was not up to the challenges a Democratic president had to face:
- As a national political leader Pierce was an accident. He was honest and tenacious of his views but, as he made up his mind with difficulty and often reversed himself before making a final decision, he gave a general impression of instability. Kind, courteous, generous, he attracted many individuals, but his attempts to satisfy all factions failed and made him many enemies. In carrying out his principles of strict construction he was most in accord with Southerners, who generally had the letter of the law on their side. He failed utterly to realize the depth and the sincerity of Northern feeling against the South and was bewildered at the general flouting of the law and the Constitution, as he described it, by the people of his own New England. At no time did he catch the popular imagination. His inability to cope with the difficult problems that arose early in his administration caused him to lose the respect of great numbers, especially in the North, and his few successes failed to restore public confidence. He was an inexperienced man, suddenly called to assume a tremendous responsibility, who honestly tried to do his best without adequate training or temperamental fitness.
In 1854, over vehement opposition, the main Democratic leader in the Senate, Stephen Douglas of Illinois, pushed through the Kansas–Nebraska Act. It established that settlers in Kansas Territory could vote to decide to allow or not allow slavery. Thousands of men moved in from North and South with the goal of voting slavery down or up and their violence shook the nation. A major re-alignment took place among voters and politicians, with new issues, new parties and new leaders. The Whig Party dissolved entirely.
North and South pull apart
The crisis for the Democratic Party came in the late 1850s as Democrats increasingly rejected the national policies demanded by the Southern Democrats, who insisted that full equality for their region required the government to acknowledge the legitimacy of slavery outside the South. The Southern demands included a fugitive slave law to recapture runaway slaves; opening Kansas to slavery; forcing a pro-slavery constitution on Kansas; acquiring Cuba (where slavery already existed); accepting the Dred Scott decision of the Supreme Court; and adopting a federal slave code to protect slavery in the territories. President Buchanan went along with these demands, but Douglas refused and proved a much better politician than Buchanan, though the bitter battle lasted for years and permanently alienated the Northern and Southern wings.
When the new Republican Party formed in 1854 on the basis of refusing to tolerate the expansion of slavery into the territories, many northern Democrats (especially Free Soilers from 1848) joined it. By 1854 the Republicans had a majority in most, but not all, of the Northern states, and the party had practically no support south of the Mason–Dixon line. The formation of the new short-lived Know-Nothing Party allowed the Democrats to win the presidential election of 1856. Buchanan, a Northern "Doughface" (his base of support was in the pro-slavery South), split the party on the issue of slavery in Kansas when he attempted to pass a federal slave code as demanded by the South. Most Democrats in the North rallied to Senator Douglas, who preached "Popular Sovereignty" and believed that a Federal slave code would be undemocratic.
The Democratic Party was unable to compete with the Republican Party, which controlled nearly all northern states by 1860, bringing a solid majority in the Electoral College. The Republicans claimed that the Northern Democrats, including Doughfaces such as Pierce and Buchanan, as well as advocates of popular sovereignty such as Stephen A. Douglas and Lewis Cass, were all accomplices to Slave Power. The Republicans argued that slaveholders (all of them Democrats) had seized control of the federal government and were blocking the progress of liberty.
In 1860, the Democrats were unable to stop the election of Republican Abraham Lincoln, even as they feared his election would lead to civil war. The Democrats split over the choice of a successor to President Buchanan along Northern and Southern lines: factions of the party provided two separate candidacies for President in the election of 1860, in which the Republican Party gained ascendancy.
Some Southern Democratic delegates followed the lead of the Fire-Eaters by walking out of the Democratic National Convention at Charleston's Institute Hall in April 1860 and were later joined by those who, once again led by the Fire-Eaters, left the Baltimore Convention the following June when the convention rejected a resolution supporting extending slavery into territories whose voters did not want it. The Southern Democrats nominated the pro-slavery incumbent Vice President, John C. Breckinridge of Kentucky, for President and General Joseph Lane, former governor of Oregon, for Vice President.
The Northern Democrats proceeded to nominate Douglas of Illinois for President and former Governor of Georgia Herschel Vespasian Johnson for Vice President, while some southern Democrats joined the Constitutional Union Party, backing its nominees (who had both been prominent Whig leaders), former Senator John Bell of Tennessee for President and the politician Edward Everett of Massachusetts for Vice President. This fracturing of the Democrats left them powerless. Republican Abraham Lincoln was elected the 16th President of the United States. Douglas campaigned across the country calling for unity and came in second in the popular vote, but carried only Missouri and New Jersey. Breckinridge carried 11 slave states, coming in second in the Electoral vote, but third in the popular vote.
Presidency of Abraham Lincoln (1861–1865)
During the Civil War, Northern Democrats divided into two factions: the War Democrats, who supported the military policies of President Lincoln; and the Copperheads, who strongly opposed them. No party politics were allowed in the Confederacy, whose political leadership, mindful of the welter prevalent in antebellum American politics and with a pressing need for unity, largely viewed political parties as inimical to good governance and as being especially unwise in wartime. Consequently, the Democratic Party halted all operations during the life of the Confederacy (1861–1865).
Partisanship flourished in the North and strengthened the Lincoln Administration as Republicans automatically rallied behind it. After the attack on Fort Sumter, Douglas rallied Northern Democrats behind the Union, but when Douglas died the party lacked an outstanding figure in the North and by 1862 an anti-war peace element was gaining strength. The most intense anti-war elements were the Copperheads. The Democratic Party did well in the 1862 congressional elections, but in 1864 it nominated General George McClellan (a War Democrat) on a peace platform and lost badly because many War Democrats bolted to National Union candidate Abraham Lincoln. Many former Democrats became Republicans, especially soldiers such as generals Ulysses S. Grant and John A. Logan.
Presidency of Andrew Johnson (1865–1869)
In the 1866 elections, the Radical Republicans won two-thirds majorities in Congress and took control of national affairs. The large Republican majorities made Congressional Democrats helpless, though they unanimously opposed the Radicals' Reconstruction policies. Realizing that the old issues were holding it back, the Democrats tried a "New Departure" that downplayed the War and stressed such issues as corruption and white supremacy.
Presidency of Ulysses S. Grant (1869–1877)
Presidency of Rutherford B. Hayes (1877–1881)
Presidency of James A. Garfield (1881)
President Garfield's assassination led both parties to accept more civil service reform.
Presidency of Chester A. Arthur (1881–1885)
The Democrats lost consecutive presidential elections from 1860 through 1880 (1876 was in dispute) and did not win the presidency until 1884. The party was weakened by its record of opposition to the war, but nevertheless benefited from White Southerners' resentment of Reconstruction and consequent hostility to the Republican Party. The nationwide depression of 1873 allowed the Democrats to retake control of the House in the 1874 Democratic landslide.
The Redeemers gave the Democrats control of every Southern state (by the Compromise of 1877); the disenfranchisement of blacks followed (1880–1900). From 1880 to 1960, the "Solid South" voted Democratic in presidential elections (except 1928). After 1900, a victory in a Democratic primary was "tantamount to election" because the Republican Party was so weak in the South.
Presidency of Grover Cleveland (1885–1889)
Although Republicans continued to control the White House until 1884, the Democrats remained competitive (especially in the mid-Atlantic and lower Midwest) and controlled the House of Representatives for most of that period. In the election of 1884, Grover Cleveland, the reforming Democratic Governor of New York, won the Presidency, a feat he repeated in 1892, having lost in the election of 1888.
Cleveland was the leader of the Bourbon Democrats. They represented business interests, supported banking and railroad goals, promoted laissez-faire capitalism, opposed imperialism and U.S. overseas expansion, opposed the annexation of Hawaii, fought for the gold standard and opposed Bimetallism. They strongly supported reform movements such as Civil Service Reform and opposed corruption of city bosses, leading the fight against the Tweed Ring.
The leading Bourbons included Samuel J. Tilden, David Bennett Hill and William C. Whitney of New York, Arthur Pue Gorman of Maryland, Thomas F. Bayard of Delaware, Henry M. Mathews and William L. Wilson of West Virginia, John Griffin Carlisle of Kentucky, William F. Vilas of Wisconsin, J. Sterling Morton of Nebraska, John M. Palmer of Illinois, Horace Boies of Iowa, Lucius Quintus Cincinnatus Lamar of Mississippi and railroad builder James J. Hill of Minnesota. A prominent intellectual was Woodrow Wilson.
Presidency of Benjamin Harrison (1889–1893)
Presidency of Grover Cleveland (1893–1897)
The Bourbons were in power when the Panic of 1893 hit and they took the blame. A fierce struggle inside the party ensued, with catastrophic losses for both the Bourbon and agrarian factions in 1894, leading to the showdown in 1896. Just before the 1894 election, President Cleveland was warned by an advisor:
- We are on the eve of very dark night, unless a return of commercial prosperity relieves popular discontent with what they believe Democratic incompetence to make laws, and consequently with Democratic Administrations anywhere and everywhere.
The warning was appropriate, for the Republicans won their biggest landslide in decades, taking full control of the House, while the Populists lost most of their support. However, Cleveland's factional enemies gained control of the Democratic Party in state after state, including full control in Illinois and Michigan and made major gains in Ohio, Indiana, Iowa and other states. Wisconsin and Massachusetts were two of the few states that remained under the control of Cleveland's allies. The opposition Democrats were close to controlling two thirds of the vote at the 1896 national convention, which they needed to nominate their own candidate. However, they were not united and had no national leader, as Illinois Governor John Peter Altgeld had been born in Germany and was ineligible to be nominated for president.
Presidency of William McKinley (1897–1901)
Religious divisions were sharply drawn. Methodists, Congregationalists, Presbyterians, Scandinavian Lutherans and other pietists in the North were closely linked to the Republican Party. In sharp contrast, liturgical groups, especially the Catholics, Episcopalians and German Lutherans, looked to the Democratic Party for protection from pietistic moralism, especially prohibition. Both parties cut across the class structure, with the Democrats gaining more support from the lower classes and Republicans more support from the upper classes.
Cultural issues, especially prohibition and foreign language schools, became matters of contention because of the sharp religious divisions in the electorate. In the North, about 50 percent of voters were pietistic Protestants (Methodists, Scandinavian Lutherans, Presbyterians, Congregationalists and Disciples of Christ) who believed the government should be used to reduce social sins, such as drinking.
Liturgical churches (Roman Catholics, German Lutherans and Episcopalians) comprised over a quarter of the vote and wanted the government to stay out of the morality business. Prohibition debates and referendums heated up politics in most states over a period of decades, as national prohibition was finally ratified in 1919 (repealed in 1933), serving as a major issue between the wet Democrats and the dry Republicans.
The Free Silver Movement
Grover Cleveland led the party faction of conservative, pro-business Bourbon Democrats, but as the depression of 1893 deepened his enemies multiplied. At the 1896 convention, the silverite-agrarian faction repudiated the President and nominated the crusading orator William Jennings Bryan on a platform of free coinage of silver. The idea was that minting silver coins would flood the economy with cash and end the depression. Cleveland supporters formed the National Democratic Party (Gold Democrats), which attracted politicians and intellectuals (including Woodrow Wilson and Frederick Jackson Turner) who refused to vote Republican.
Bryan, an overnight sensation because of his "Cross of Gold" speech, waged a new-style crusade against the supporters of the gold standard. Criss-crossing the Midwest and East by special train – he was the first candidate since 1860 to go on the road – he gave over 500 speeches to audiences in the millions. In St. Louis he gave 36 speeches to workingmen's audiences across the city, all in one day. Most Democratic newspapers were hostile toward Bryan, but he seized control of the media by making the news every day as he hurled thunderbolts against Eastern monied interests.
The rural folk in the South and Midwest were ecstatic, showing an enthusiasm never before seen, but ethnic Democrats (especially Germans and Irish) were alarmed and frightened by Bryan. The middle classes, businessmen, newspaper editors, factory workers, railroad workers and prosperous farmers generally rejected Bryan's crusade. Republican William McKinley promised a return to prosperity based on the gold standard, support for industry, railroads and banks and pluralism that would enable every group to move ahead.
Although Bryan lost the election in a landslide, he did win the hearts and minds of a majority of Democrats, as shown by his renomination in 1900 and 1908. As late as 1924, the Democrats put his brother Charles W. Bryan on their national ticket. The victory of the Republican Party in the election of 1896 marked the start of the "Progressive Era", which lasted from 1896 to 1932, in which the Republican Party usually was dominant.
Presidency of Theodore Roosevelt (1901–1909)
The 1896 election marked a political realignment in which the Republican Party controlled the presidency for 28 of 36 years. The Republicans dominated most of the Northeast and Midwest and half the West. Bryan, with a base in the South and Plains states, was strong enough to get the nomination in 1900 (losing to William McKinley) and 1908 (losing to William Howard Taft). Theodore Roosevelt dominated the first decade of the century and to the annoyance of Democrats "stole" the trust issue by crusading against trusts.
Anti-Bryan conservatives controlled the convention in 1904, but faced a Theodore Roosevelt landslide. Bryan dropped his free silver and anti-imperialism rhetoric and supported mainstream progressive issues, such as the income tax, anti-trust and direct election of Senators.
Presidency of William Howard Taft (1909–1913)
The Democratic Party benefited from the Taft-Roosevelt Republican split during Taft's term, electing the first Democratic President and fully Democratic Congress in 20 years.
Presidency of Woodrow Wilson (1913–1921)
Taking advantage of a deep split in the Republican Party, the Democrats took control of the House in 1910 and elected the intellectual reformer Woodrow Wilson in 1912 and 1916. Wilson successfully led Congress to a series of progressive laws, including a reduced tariff, stronger antitrust laws, new programs for farmers, hours-and-pay benefits for railroad workers and the outlawing of child labor (which was reversed by the Supreme Court).
Wilson tolerated the segregation of the federal Civil Service by Southern cabinet members. Furthermore, bipartisan constitutional amendments for prohibition and women's suffrage were passed in his second term. In effect, Wilson laid to rest the issues of tariffs, money and antitrust that had dominated politics for 40 years.
Wilson oversaw the U.S. role in World War I and helped write the Versailles Treaty, which included the League of Nations. However, in 1919 Wilson's political skills faltered and suddenly everything turned sour. The Senate rejected Versailles and the League, a nationwide wave of violent, unsuccessful strikes and race riots caused unrest and Wilson's health collapsed.
The Democrats lost by a huge landslide in 1920, doing especially poorly in the cities, where the German-Americans deserted the ticket; and the Irish Catholics, who dominated the party apparatus, sat on their hands.
Presidency of Warren G. Harding (1921–1923)
Although they recovered considerable ground in the Congressional elections of 1922, the entire decade saw the Democrats as a helpless minority in Congress and as a weak force in most Northern states.
Presidency of Calvin Coolidge (1923–1929)
At the 1924 Democratic National Convention, a resolution denouncing the Ku Klux Klan was introduced by forces allied with Al Smith and Oscar W. Underwood in order to embarrass the front-runner, William Gibbs McAdoo. After much debate, the resolution failed by a single vote. The KKK faded away soon after, but the deep split in the party over cultural issues, especially prohibition, facilitated Republican landslides in 1920, 1924 and 1928. However, Al Smith did build a strong Catholic base in the big cities in 1928 and Franklin D. Roosevelt's election as Governor of New York that year brought a new leader to center stage.
Presidency of Herbert Hoover (1929–1933)
The Great Depression marred Hoover's term as the Democratic Party made large gains in the 1930 congressional elections and garnered a landslide win in 1932.
Presidency of Franklin D. Roosevelt (1933–1945)
The stock market crash of 1929 and the ensuing Great Depression set the stage for a more progressive government and Franklin D. Roosevelt won a landslide victory in the election of 1932, campaigning on a platform of "Relief, Recovery, and Reform", that is relief of unemployment and rural distress, recovery of the economy back to normal and long-term structural reforms to prevent a repetition of the Depression. This came to be termed "The New Deal" after a phrase in Roosevelt's acceptance speech.
The Democrats also swept to large majorities in both houses of Congress and among state governors. Roosevelt altered the nature of the party, away from laissez-faire capitalism and towards an ideology of economic regulation and insurance against hardship. Two old words took on new meanings: "liberal" now meant a supporter of the New Deal while "conservative" meant an opponent.
Conservative Democrats were outraged and led by Al Smith they formed the American Liberty League in 1934 and counterattacked. They failed and either retired from politics or joined the Republican Party. A few of them, such as Dean Acheson, found their way back to the Democratic Party.
The 1933 programs, called "the First New Deal" by historians, represented a broad consensus. Roosevelt tried to reach out to business and labor, farmers and consumers, cities and countryside. However, by 1934 he was moving toward a more confrontational policy. After making gains in state governorships and in Congress, in 1934 Roosevelt embarked on an ambitious legislative program that came to be called "The Second New Deal". It was characterized by building up labor unions, nationalizing welfare by the WPA, setting up Social Security, imposing more regulations on business (especially transportation and communications) and raising taxes on business profits.
Roosevelt's New Deal programs focused on job creation through public works projects as well as on social welfare programs such as Social Security. It also included sweeping reforms to the banking system, work regulation, transportation, communications and stock markets, as well as attempts to regulate prices. His policies soon paid off by uniting a diverse coalition of Democratic voters called the New Deal coalition, which included labor unions, Southerners, minorities (most significantly, Catholics and Jews) and liberals. This united voter base allowed Democrats to be elected to Congress and the presidency for much of the next 30 years.
After a triumphant re-election in 1936, he announced plans to enlarge the Supreme Court, which tended to oppose his New Deal, by five new members. A firestorm of opposition erupted, led by his own Vice President John Nance Garner. Roosevelt was defeated by an alliance of Republicans and conservative Democrats, who formed a conservative coalition that managed to block nearly all liberal legislation (only a minimum wage law got through). Annoyed by the conservative wing of his own party, Roosevelt made an attempt to rid himself of it and in 1938 he actively campaigned against five incumbent conservative Democratic senators, though all five senators won re-election.
Under Roosevelt, the Democratic Party became identified more closely with modern liberalism, which included the promotion of social welfare, labor unions, civil rights and the regulation of business. The opponents, who stressed long-term growth and support for entrepreneurship and low taxes, now started calling themselves "conservatives".
Presidency of Harry S. Truman (1945–1953)
Harry S. Truman took over after Roosevelt's death in 1945 and the rifts inside the party that Roosevelt had papered over began to emerge. Major components included the big city machines, the Southern state and local parties, the far-left and the "Liberal coalition" or "Liberal-Labor coalition" comprising the AFL, CIO and ideological groups such as the NAACP (representing Blacks), the American Jewish Congress (AJC) and the Americans for Democratic Action (ADA) (representing liberal intellectuals). By 1948, the unions had expelled nearly all the far-left and communist elements.
On the right, the Republicans blasted Truman's domestic policies. "Had Enough?" was the winning slogan as Republicans recaptured Congress in 1946 for the first time since 1928.
Many party leaders were ready to dump Truman in 1948, but after General Dwight D. Eisenhower rejected their invitation they lacked an alternative. Truman counterattacked, pushing J. Strom Thurmond and his Dixiecrats out, as well as taking advantage of the splits inside the Republican Party and was thus reelected in a stunning surprise. However, all of Truman's Fair Deal proposals, such as universal health care, were defeated by the Southern Democrats in Congress. His seizure of the steel industry was reversed by the Supreme Court.
On the far-left, former Vice President Henry A. Wallace denounced Truman as a war-monger for his anti-Soviet programs, the Truman Doctrine, Marshall Plan and NATO. Wallace quit the party and ran for President as an independent in 1948. He called for détente with the Soviet Union, but much of his campaign was controlled by communists who had been expelled from the main unions. Wallace fared poorly and helped turn the anti-communist vote toward Truman.
By cooperating with internationalist Republicans, Truman succeeded in defeating isolationists on the right and supporters of softer lines on the Soviet Union on the left to establish a Cold War program that lasted until the fall of the Soviet Union in 1991. Wallace supporters and other Democrats who were farther left were pushed out of the party and the CIO in 1946–1948 by young anti-communists like Hubert Humphrey, Walter Reuther and Arthur Schlesinger Jr. Hollywood emerged in the 1940s as an important new base in the party and was led by movie-star politicians such as Ronald Reagan, who strongly supported Roosevelt and Truman at this time.
In foreign policy, Europe was safe, but troubles mounted in Asia as China fell to the communists in 1949. Truman entered the Korean War without formal Congressional approval. When the war turned to a stalemate and he fired General Douglas MacArthur in 1951, Republicans blasted his policies in Asia. A series of petty scandals among friends and buddies of Truman further tarnished his image, allowing the Republicans in 1952 to crusade against "Korea, Communism and Corruption". Truman dropped out of the Presidential race early in 1952, leaving no obvious successor. The convention nominated Adlai Stevenson in 1952 and 1956, only to see him overwhelmed by two Eisenhower landslides.
In Congress, the powerful duo of House Speaker Sam Rayburn and Senate Majority leader Lyndon B. Johnson held the party together, often by compromising with Eisenhower. In 1958, the party made dramatic gains in the midterms and seemed to have a permanent lock on Congress, thanks largely to organized labor. Indeed, Democrats had majorities in the House every election from 1930 to 1992 (except 1946 and 1952).
Most Southern Congressmen were conservative Democrats and they usually worked with conservative Republicans. The result was a conservative coalition that blocked practically all liberal domestic legislation from 1937 to the 1970s, except for a brief spell 1964–1965, when Johnson neutralized its power. The counterbalance to the conservative coalition was the Democratic Study Group, which led the charge to liberalize the institutions of Congress and eventually pass a great deal of the Kennedy–Johnson program.
Presidency of Dwight D. Eisenhower (1953–1961)
Presidency of John F. Kennedy (1961–1963)
The election of John F. Kennedy in 1960 over then-Vice President Richard Nixon re-energized the party. His youth, vigor and intelligence caught the popular imagination. New programs like the Peace Corps harnessed idealism. In terms of legislation, Kennedy was stalemated by the conservative coalition.
Though Kennedy's term in office lasted only about a thousand days, he tried to hold back communist gains after the failed Bay of Pigs invasion in Cuba and the construction of the Berlin Wall and sent 16,000 soldiers to Vietnam to advise the hard-pressed South Vietnamese army. He challenged America in the Space Race to land an American man on the moon by 1969. After the Cuban Missile Crisis he moved to de-escalate tensions with the Soviet Union.
Kennedy also pushed for civil rights and racial integration, one example being Kennedy assigning federal marshals to protect the Freedom Riders in the South. His election did mark the coming of age of the Catholic component of the New Deal Coalition. After 1964, middle class Catholics started voting Republican in the same proportion as their Protestant neighbors. Except for the Chicago of Richard J. Daley, the last of the Democratic machines faded away. President Kennedy was assassinated on November 22, 1963 in Dallas, Texas.
Presidency of Lyndon B. Johnson (1963–1969)
Then-Vice President Lyndon B. Johnson was sworn in as the new President. Johnson, heir to the New Deal ideals, broke the conservative coalition in Congress and passed a remarkable number of laws, known as the Great Society. Johnson succeeded in passing major civil rights laws that restarted racial integration in the South. At the same time, Johnson escalated the Vietnam War, leading to an inner conflict inside the Democratic Party that shattered the party in the elections of 1968.
The Democratic Party platform of the 1960s was largely shaped by the ideals of President Johnson's "Great Society". The New Deal coalition began to fracture as more Democratic leaders voiced support for civil rights, upsetting the party's traditional base of Southern Democrats and Catholics in Northern cities. After Harry Truman's platform gave strong support to civil rights and anti-segregation laws during the 1948 Democratic National Convention, many Southern Democratic delegates split from the party and formed the "Dixiecrats", led by South Carolina governor Strom Thurmond (who as a Senator would later join the Republican Party). However, few other Democrats left the party.
On the other hand, African Americans, who had traditionally given strong support to the Republican Party since its inception as the "anti-slavery party", continued to shift to the Democratic Party, largely due to the advocacy of and support for civil rights by such prominent Democrats as Hubert Humphrey and former First Lady Eleanor Roosevelt, and to a lesser extent the economic opportunities offered by the New Deal relief programs. Although Republican Dwight D. Eisenhower carried half the South in 1952 and 1956 and Senator Barry Goldwater carried five Southern states in 1964, Democrat Jimmy Carter carried all of the South except Virginia in 1976, and there was no long-term realignment until Ronald Reagan's sweeping victories in the South in 1980 and 1984.
The party's dramatic reversal on civil rights issues culminated when Democratic President Lyndon B. Johnson signed into law the Civil Rights Act of 1964. The act passed both the House and the Senate with a larger share of Republicans than Democrats voting in favor; most of the opposition came from Southern Democrats. Meanwhile, the Republicans, led again by Richard Nixon, were beginning to implement their new economic policies, which aimed to resist federal encroachment on the states while appealing to conservatives and moderates in the rapidly growing cities and suburbs of the South.
The year 1968 marked a major crisis for the party. In January, even though it was a military defeat for the Viet Cong, the Tet Offensive began to turn American public opinion against the Vietnam War. Senator Eugene McCarthy rallied intellectuals and anti-war students on college campuses and came within a few percentage points of defeating Johnson in the New Hampshire primary; Johnson was permanently weakened. Four days later, Senator Robert Kennedy, brother of the late President, entered the race.
Johnson stunned the nation on March 31 when he withdrew from the race; four weeks later his Vice President, Hubert H. Humphrey, entered the race, though he did not run in any primary. Kennedy and McCarthy traded primary victories while Humphrey gathered the support of labor unions and the big-city bosses. Kennedy won the critical California primary on June 4, but he was assassinated that night. Even as Kennedy won California, Humphrey had already amassed 1,000 of the 1,312 delegate votes needed for the nomination, while Kennedy had about 700.
During the 1968 Democratic National Convention, while police and the National Guard violently confronted anti-war protesters on the streets and parks of Chicago, the Democrats nominated Humphrey. Meanwhile, Alabama's Democratic governor George C. Wallace launched a third-party campaign and at one point was running second to the Republican candidate Richard Nixon. Nixon barely won, with the Democrats retaining control of Congress. The party was now so deeply split that it would not again win a majority of the popular vote for president until 1976, when Jimmy Carter won with 50.1%.
The degree to which the Southern Democrats had abandoned the party became evident in the 1968 presidential election when the electoral votes of every former Confederate state except Texas went to either Republican Richard Nixon or independent Wallace. Humphrey's electoral votes came mainly from the Northern states, marking a dramatic reversal from the 1948 election 20 years earlier, when the losing Republican electoral votes were concentrated in the same states.
Presidency of Richard Nixon (1969–1974)
Following the 1968 debacle, the McGovern-Fraser Commission proposed and the party adopted far-reaching changes in how national convention delegates were selected. More power over the presidential nominee selection accrued to the rank and file and presidential primaries became significantly more important. In 1972, the Democrats nominated Senator George McGovern (SD) as the presidential candidate on a platform which advocated, among other things, immediate U.S. withdrawal from Vietnam (with his anti-war slogan "Come Home, America!") and a guaranteed minimum income for all Americans. McGovern's forces at the national convention ousted Mayor Richard J. Daley and the entire Chicago delegation, replacing them with insurgents led by Jesse Jackson. After it became known that McGovern's running mate Thomas Eagleton had received electric shock therapy, McGovern said he supported Eagleton "1000%", but he was soon forced to drop him and find a new running mate.
Numerous top names turned him down, but McGovern finally selected Sargent Shriver, a Kennedy in-law who was close to Mayor Daley. On July 14, 1972, McGovern appointed his campaign manager, Jean Westwood, as the first woman chair of the Democratic National Committee. McGovern was defeated in a landslide by incumbent Richard Nixon, winning only Massachusetts and Washington, D.C.
Presidency of Gerald Ford (1974–1977)
The sordid Watergate scandal soon destroyed the Nixon Presidency, giving the Democrats a flicker of hope. With Gerald Ford's pardon of Nixon soon after his resignation in 1974, the Democrats used the "corruption" issue to make major gains in the off-year elections. In 1976, mistrust of the administration, complicated by a combination of economic recession and inflation, sometimes called "stagflation", led to Ford's defeat by Jimmy Carter, a former Governor of Georgia. Carter won as a little-known outsider by promising honesty in Washington, a message that played well to voters as he swept the South and won narrowly.
Presidency of Jimmy Carter (1977–1981)
Carter had served as a naval officer, a farmer, a state senator and a one-term governor. His only experience with federal politics was when he chaired the Democratic National Committee's congressional and gubernatorial elections in 1974. Some of Carter's major accomplishments consisted of the creation of a national energy policy and the consolidation of governmental agencies, resulting in two new cabinet departments, the United States Department of Energy and the United States Department of Education. Carter also successfully deregulated the trucking, airline, rail, finance, communications and oil industries (thus backtracking on the New Deal approach to regulation of the economy), bolstered the social security system and appointed record numbers of women and minorities to significant government and judicial posts. He also enacted strong legislation on environmental protection through the expansion of the National Park Service in Alaska, creating 103 million acres (417,000 km²) of park land.
In foreign affairs, Carter's accomplishments consisted of the Camp David Accords, the Panama Canal Treaties, the establishment of full diplomatic relations with the People's Republic of China and the negotiation of the SALT II Treaty. In addition, he championed human rights throughout the world and used human rights as the center of his administration's foreign policy.
Even with all of these successes, Carter failed to implement a national health plan or to reform the tax system as he had promised in his campaign, and inflation was also on the rise. Abroad, the Iranians held 52 Americans hostage for 444 days and Carter's diplomatic and military rescue attempts failed. The Soviet invasion of Afghanistan in late 1979 further disenchanted some Americans with Carter. In 1980, Carter defeated Senator Ted Kennedy to gain renomination, but lost to Ronald Reagan in November. The Democrats lost 12 Senate seats and for the first time since 1954 the Republicans controlled the Senate, though the House remained in Democratic hands. After his defeat, Carter negotiated the release of every American hostage held in Iran; they were flown out of Iran minutes after Reagan was inaugurated, ending the 444-day crisis.
Presidency of Ronald Reagan (1981–1989)
1980s: battling Reaganism
Democrats who supported many conservative policies were instrumental in the election of Republican President Ronald Reagan in 1980. The "Reagan Democrats" were Democrats before the Reagan years and afterward, but they voted for Ronald Reagan in 1980 and 1984 and for George H. W. Bush in 1988, producing their landslide victories. Reagan Democrats were mostly white ethnics in the Northeast and Midwest who were attracted to Reagan's social conservatism on issues such as abortion and to his strong foreign policy. They did not continue to vote Republican in 1992 or 1996, so the term fell into disuse except as a reference to the 1980s. The term is not used to describe White Southerners who became permanent Republicans in presidential elections.
Stan Greenberg, a Democratic pollster, analyzed white ethnic voters – largely unionized auto workers – in suburban Macomb County, Michigan, just north of Detroit. The county voted 63 percent for Kennedy in 1960 and 66 percent for Reagan in 1984. He concluded that Reagan Democrats no longer saw the Democratic Party as the champion of their middle-class aspirations, but instead saw it as a party working primarily for the benefit of others, especially African Americans, advocacy groups of the political left and the very poor.
The failure to hold the Reagan Democrats and the white South led to the final collapse of the New Deal coalition. In 1984, Reagan carried 49 states against former Vice President and Minnesota Senator Walter Mondale, a New Deal stalwart.
In response to these landslide defeats, the Democratic Leadership Council (DLC) was created in 1985. It worked to move the party rightwards toward the ideological center, in part to recover fundraising that had been lost to the Republicans as corporate donors backed Reagan. The goal was to retain left-of-center voters as well as moderates and conservatives on social issues, making the Democrats a catch-all party with widespread appeal to most opponents of the Republicans. Despite this, Massachusetts Governor Michael Dukakis, running not as a New Dealer but as an efficiency expert in public administration, lost by a landslide in 1988 to Vice President George H. W. Bush.
The South becomes Republican
For nearly a century after Reconstruction, the white South identified with the Democratic Party. The Democrats' lock on power was so strong the region was called the Solid South, although the Republicans controlled parts of the Appalachian mountains and they competed for statewide office in the border states. Before 1948, Southern Democrats believed that their party, with its respect for states' rights and appreciation of traditional southern values, was the defender of the Southern way of life. Southern Democrats warned against aggressive designs on the part of Northern liberals and Republicans and civil rights activists whom they denounced as "outside agitators".
The adoption of a strong civil rights plank by the 1948 convention and the integration of the armed forces by President Harry S. Truman's Executive Order 9981, which provided for equal treatment and opportunity for African-American servicemen, drove a wedge between the Northern and Southern branches of the party. The party was sharply divided in the following election, as Southern Democrat Strom Thurmond ran as the candidate of the breakaway "States' Rights Democratic Party".
With the presidency of John F. Kennedy the Democratic Party began to embrace the Civil Rights Movement and its lock on the South was irretrievably broken. Upon signing the Civil Rights Act of 1964, President Lyndon B. Johnson prophesied: "We have lost the South for a generation".
Modernization had brought factories, national businesses and larger, more cosmopolitan cities such as Atlanta, Dallas, Charlotte and Houston to the South, as well as millions of migrants from the North and more opportunities for higher education. Meanwhile, the cotton and tobacco economy of the traditional rural South faded away, as former farmers commuted to factory jobs. As the South became more like the rest of the nation, it could not stand apart in terms of racial segregation.
Integration and the Civil Rights Movement caused enormous controversy in the white South, with many attacking it as a violation of states' rights. When segregation was outlawed by court order and by the Civil Rights Acts of 1964 and 1965, a die-hard element resisted integration, led by Democratic governors Orval Faubus of Arkansas, Lester Maddox of Georgia and especially George Wallace of Alabama. These populist governors appealed to a less-educated, blue-collar electorate that on economic grounds favored the Democratic Party and opposed desegregation. After 1965, most Southerners accepted integration (with the exception of public schools).
Believing themselves betrayed by the Democratic Party, traditional White Southerners joined the new middle-class and the Northern transplants in moving toward the Republican Party. Meanwhile, newly enfranchised black voters began supporting Democratic candidates at the 80-90-percent levels, producing Democratic leaders such as Julian Bond and John Lewis of Georgia and Barbara Jordan of Texas. Just as Martin Luther King had promised, integration had brought about a new day in Southern politics. The Republican Party's Southern strategy further alienated black voters from the party.
In addition to their white middle-class base, Republicans attracted strong majorities among evangelical Christians, who prior to the 1980s had been largely apolitical. Exit polls in the 2004 presidential election showed that George W. Bush led John Kerry 70–30% among white Southerners, who comprised 71% of Southern voters. Kerry had a 90–9 lead among the 18% of Southern voters who were black. One-third of Southern voters said they were white Evangelicals; they voted for Bush 80–20.
Presidency of George H. W. Bush (1989–1993)
Opposition to Gulf War
The Democrats included a strong element that came of age in opposition to the Vietnam War and remained hostile toward American military interventions. On August 1, 1990, Iraq, led by Saddam Hussein, invaded Kuwait. President Bush formed an international coalition and secured United Nations approval to expel Iraq. Congress on January 12, 1991 authorized by a narrow margin the use of military force against Iraq, with Republicans in favor and Democrats opposed. The vote in the House was 250–183 and in the Senate 52-47. In the Senate, 42 Republicans and 10 Democrats voted yes to war, while 45 Democrats and two Republicans voted no. In the House, 164 Republicans and 86 Democrats voted yes and 179 Democrats, three Republicans and one Independent voted no. The Gulf War, a military operation known as "Desert Storm", was short and successful, but Hussein was allowed to remain in power. The Arab countries (and Japan) repaid all the American military costs.
Presidency of Bill Clinton (1993–2001)
In the 1990s, the Democratic Party revived itself, in part by moving to the right on economic policy. In 1992, for the first time in 12 years, the United States had a Democrat in the White House. During President Bill Clinton's term, the federal budget was balanced for the first time since 1969 and the country enjoyed a robust economy that saw incomes grow across the board. In 1994, the economy had the lowest combination of unemployment and inflation in 25 years. President Clinton also signed into law several gun control bills, including the Brady Bill, which imposed a five-day waiting period on handgun purchases, and a ban on many types of semi-automatic firearms (which expired in 2004). His Family and Medical Leave Act, covering some 40 million Americans, offered workers up to 12 weeks of unpaid, job-guaranteed leave for childbirth or a personal or family illness. He deployed the U.S. military to Haiti to reinstate deposed president Jean-Bertrand Aristide, took a strong hand in Palestinian-Israeli peace negotiations, brokered a historic cease-fire in Northern Ireland and negotiated the Dayton accords. In 1996, Clinton became the first Democratic President to be re-elected since Franklin D. Roosevelt.
However, the Democrats lost their majority in both Houses of Congress in 1994. Clinton vetoed two Republican-backed welfare reform bills before signing the third, the Personal Responsibility and Work Opportunity Act of 1996. The tort reform Private Securities Litigation Reform Act passed over his veto. Labor unions, which had been steadily losing membership since the 1960s, found they had also lost political clout inside the Democratic Party and Clinton enacted the North American Free Trade Agreement with Canada and Mexico over unions' strong objections. In 1998, the Republican-led House of Representatives impeached Clinton on two charges, though he was subsequently acquitted by the United States Senate in 1999. Under Clinton's leadership, the United States participated in NATO's Operation Allied Force against Yugoslavia that year.
In the 1990s the Clinton administration continued the free-market, or neoliberal, reforms that had begun under the Reagan administration. However, economist Sebastian Mallaby argues that the party had begun adopting pro-business, pro-free-market principles even earlier, after 1976:
- Free-market ideas were embraced by Democrats almost as much as by Republicans. Jimmy Carter initiated the big push toward deregulation, generally with the support of his party in Congress. Bill Clinton presided over the growth of the loosely supervised shadow financial system and the repeal of Depression-era restrictions on commercial banks.
Historian Walter Scheidel also posits that both parties shifted to free markets in the 1970s:
- In the United States, both of the dominant parties have shifted toward free-market capitalism. Even though analysis of roll call votes show that since the 1970s, Republicans have drifted farther to the right than Democrats have moved to the left, the latter were instrumental in implementing financial deregulation in the 1990s and focused increasingly on cultural issues such as gender, race, and sexual identity rather than traditional social welfare policies.
As the DLC attempted to move the Democratic agenda to the right (to a more centrist position), prominent Democrats from both the centrist and conservative factions (such as Terry McAuliffe) assumed leadership of the party and its direction. Some liberals and progressives felt alienated by the Democratic Party, which they felt had become unconcerned with the interests of the common people and left-wing issues in general. Some Democrats challenged the validity of such critiques, citing the Democratic role in pushing for progressive reforms.
Election of 2000
During the 2000 presidential election, the Democrats chose Vice President Al Gore to be the party's candidate for the Presidency. Gore ran against George W. Bush, the Republican candidate and son of former President George H. W. Bush. The issues Gore championed included debt reduction, tax cuts, foreign policy, public education, global warming, judicial appointments and affirmative action. Nevertheless, Gore's affiliation with Clinton and the DLC caused critics to assert that Bush and Gore were too similar, especially on free trade, reductions in social welfare and the death penalty. Green Party presidential candidate Ralph Nader was particularly vocal in his criticisms.
Gore won a popular plurality of over 540,000 votes over Bush, but lost in the Electoral College, 271 to 266. Many Democrats blamed Nader's third-party spoiler role for Gore's defeat. They pointed to the states of New Hampshire (4 electoral votes) and Florida (25 electoral votes), where Nader's total vote exceeded Bush's margin of victory. In Florida, Nader received 97,000 votes and Bush defeated Gore by a mere 537 votes. Controversy plagued the election and Gore largely dropped out of politics for years, though by 2005 he was making speeches critical of Bush's foreign policy.
Despite Gore's close defeat, the Democrats gained five seats in the Senate (including the election of Hillary Clinton in New York) to turn a 55–45 Republican edge into a 50–50 split (with a Republican Vice President breaking a tie). However, when Republican Senator Jim Jeffords of Vermont decided in 2001 to become an independent and vote with the Democratic Caucus, the majority status shifted along with the seat, including control of the floor (by the Majority Leader) and control of all committee chairmanships. However, the Republicans regained their Senate majority with gains in 2002 and 2004, leaving the Democrats with only 44 seats, the fewest since the 1920s.
Presidency of George W. Bush (2001–2009)
In the aftermath of the September 11, 2001 attacks, the nation's focus shifted to issues of national security. All but one Democrat (Representative Barbara Lee) voted with their Republican counterparts to authorize President Bush's 2001 invasion of Afghanistan. House Minority Leader Richard Gephardt and Senate Majority Leader Thomas Daschle pushed Democrats to vote for the USA PATRIOT Act and the invasion of Iraq. The Democrats were split over entering Iraq in 2003 and increasingly expressed concerns about both the justification and progress of the War on Terrorism, as well as the domestic effects, including threats to civil rights and civil liberties, of the Patriot Act. Senator Russ Feingold was the only Senator to vote against the act.
In the wake of the financial fraud scandal of the Enron Corporation and other corporations, Congressional Democrats pushed for a legal overhaul of business accounting with the intention of preventing further accounting fraud. This led to the bipartisan Sarbanes-Oxley Act in 2002. With job losses and bankruptcies across regions and industries increasing in 2001 and 2002, the Democrats generally campaigned on the issue of economic recovery. That did not work for them in 2002, as the Democrats lost a few seats in the U.S. House of Representatives.
They also lost three Senate seats: in Georgia, where Max Cleland was unseated; in Minnesota, where Paul Wellstone died and the Democrat who replaced him on the ballot lost the election; and in Missouri, where Jean Carnahan was unseated. Meanwhile, Democrats gained governorships in New Mexico (Bill Richardson), Arizona (Janet Napolitano), Michigan (Jennifer Granholm) and Wyoming (Dave Freudenthal), but lost governorships in South Carolina (Jim Hodges), Alabama (Don Siegelman) and, for the first time in more than a century, Georgia (Roy Barnes).
The election led to another round of soul-searching about the party's narrowing base. Democrats suffered further losses in 2003, when a voter recall unseated the unpopular Democratic governor of California, Gray Davis, and replaced him with Republican Arnold Schwarzenegger. By the end of 2003, the four most populous states (California, Texas, New York and Florida) had Republican governors.
Election of 2004
The 2004 campaign started as early as December 2002, when Gore announced he would not run again in the 2004 election. Howard Dean, former Governor of Vermont, an opponent of the war and a critic of the Democratic establishment, was the front-runner leading into the Democratic primaries. Dean had immense grassroots support, especially from the left-wing of the party. Massachusetts Senator John Kerry, a more centrist figure with heavy support from the Democratic Leadership Council, was nominated because he was seen as more "electable" than Dean.
As layoffs of American workers occurred in various industries due to outsourcing, some Democrats (including Dean and senatorial candidate Erskine Bowles of North Carolina) began to refine their positions on free trade and some even questioned their past support for it. By 2004, the failure of George W. Bush's administration to find weapons of mass destruction in Iraq, mounting combat casualties and fatalities in the ongoing Iraq War, as well as the lack of any end point for the War on Terror were frequently debated issues in the election. That year, Democrats generally campaigned on surmounting the jobless recovery, solving the Iraq crisis and fighting terrorism more efficiently.
In the end, Kerry lost both the popular vote (by 3 million out of over 120 million votes cast) and the Electoral College. Republicans also gained four seats in the Senate (leaving the Democrats with only 44 seats, their fewest since the 1920s) and three seats in the House of Representatives. For the first time since 1952, the Democratic leader of the Senate lost re-election. After the election, there were 3,660 Democratic state legislators across the nation to the Republicans' 3,557. Democrats gained governorships in Louisiana, New Hampshire and Montana. However, they lost the governorship of Missouri and a legislative majority in Georgia, which had long been a Democratic stronghold. Senate pickups for the Democrats included Ken Salazar in Colorado and 2004 Democratic National Convention keynote speaker Barack Obama in Illinois.
There were many reasons for the defeat and after the election most analysts concluded that Kerry was a poor campaigner. A group of Vietnam veterans opposed to Kerry called the Swift Boat Veterans for Truth undercut Kerry's use of his military past as a campaign strategy. Kerry was unable to reconcile his initial support of the Iraq War with his opposition to the war in 2004 or manage the deep split in the Democratic Party between those who favored and opposed the war.
Republicans ran thousands of television commercials to argue that Kerry had flip-flopped on Iraq. When Kerry's home state of Massachusetts legalized same-sex marriage, the issue split liberal and conservative Democrats and independents (Kerry publicly stated throughout his campaign that he opposed same-sex marriage, but favored civil unions). Republicans exploited the same-sex marriage issue by promoting ballot initiatives in 11 states that brought conservatives to the polls in large numbers: all 11 initiatives passed.
Flaws in vote-counting systems may also have played a role in Kerry's defeat (see 2004 United States election voting controversies). Senator Barbara Boxer of California and several Democratic U.S. Representatives (including John Conyers of Michigan) raised the issue of voting irregularities in Ohio when the 109th Congress first convened, but they were defeated 267–31 by the House and 74-1 by the Senate. Other factors included a healthy job market, a rising stock market, strong home sales and low unemployment.
After the 2004 election, prominent Democrats began to rethink the party's direction and a variety of strategies for moving forward were voiced. Some Democrats proposed moving towards the right to regain seats in the House and Senate and possibly win the Presidency in the election of 2008, while others demanded that the party move more to the left and become a stronger opposition party. One topic of discussion was the party's policies surrounding reproductive rights.
Rethinking the party's position on gun control became a matter of discussion, brought up by Howard Dean, Bill Richardson, Brian Schweitzer and other Democrats who had won governorships in states where Second Amendment rights were important to many voters. In What's the Matter with Kansas?, commentator Thomas Frank wrote that the Democrats needed to return to campaigning on economic populism.
Howard Dean and the fifty-state strategy (2005–2007)
These debates were reflected in the 2005 campaign for Chairman of the Democratic National Committee, which Howard Dean won over the objections of many party insiders. Dean sought to move the Democratic strategy away from the establishment and bolster support for the party's state organizations, even in red states (the fifty-state strategy).
When the 109th Congress convened, Harry Reid, the new Senate Minority Leader, tried to convince the Democratic Senators to vote more as a bloc on important issues and he forced the Republicans to abandon their push for privatization of Social Security. In 2005, the Democrats retained their governorships in Virginia and New Jersey, electing Tim Kaine and Jon Corzine, respectively. However, the party lost the mayoral race in New York City, a Democratic stronghold, for the fourth straight time.
With scandals involving lobbyist Jack Abramoff as well as Duke Cunningham, Tom DeLay, Mark Foley and Bob Taft, the Democrats used the slogan "Culture of corruption" against the Republicans during the 2006 campaign. Negative public opinion on the Iraq War, widespread dissatisfaction over the ballooning federal deficit and the inept handling of the Hurricane Katrina disaster dragged down President Bush's job approval ratings.
As a result of the 2006 midterm elections, the Democratic Party became the majority party in the House of Representatives and its caucus in the United States Senate constituted a majority when the 110th Congress convened in 2007. The Democrats had spent twelve successive years as the minority party in the House before the 2006 mid-term elections. The Democrats also went from controlling a minority of governorships to a majority. The number of seats held by party members likewise increased in various state legislatures, giving the Democrats control of a plurality of them nationwide. No Democratic incumbent was defeated and no Democratic-held open seat was lost in either the U.S. Senate, U.S. House, or with regards to any governorship.
The Democratic Party's electoral success has been attributed by some to running conservative-leaning Democrats against at-risk Republican incumbents, while others claim that running more populist and progressive candidates was the source of the party's success. Exit polling suggested that corruption was a key issue for many voters.
In the 2006 Democratic caucus leadership elections, Democrats chose Representative Steny Hoyer of Maryland for House Majority Leader and nominated Representative Nancy Pelosi of California for speaker. Senate Democrats chose Harry Reid of Nevada for United States Senate Majority Leader. Pelosi was elected as the first female House speaker at the commencement of the 110th Congress. The House soon passed the measures that comprised the Democrats' 100-Hour Plan.
2008 presidential election
The 2008 Democratic presidential primaries left two candidates in close competition: Illinois Senator Barack Obama and New York Senator Hillary Clinton. Obama had won more support within a major American political party than any previous African American candidate, and Clinton more than any previous female candidate. Before official ratification at the 2008 Democratic National Convention, Obama emerged as the party's presumptive nominee. With President George W. Bush of the Republican Party ineligible for a third term and Vice President Dick Cheney not pursuing his party's nomination, Senator John McCain of Arizona emerged as the GOP nominee relatively early.
Throughout most of the 2008 general election, polls showed a close race between Obama and John McCain. However, Obama maintained a small but widening lead over McCain in the wake of the liquidity crisis of September 2008.
On November 4, Obama defeated McCain by a significant margin in the Electoral College and the party also made further gains in the Senate and House, adding to its 2006 gains.
Presidency of Barack Obama (2009–2017)
On January 20, 2009, Obama was inaugurated as the 44th president of the United States in a ceremony attended by nearly 2 million people, the largest crowd of spectators ever to witness the inauguration of a new President. That same day in Washington, D.C., Republican House leaders met in an invitation-only meeting for four hours to discuss the future of the Republican Party under the Obama administration. During the meeting, they agreed to obstruct and block President Obama on all legislation, pledging to bring Congress to a standstill regardless of how much it would hurt the American economy.
One of the first acts by the Obama administration after assuming control was an order signed by Chief of Staff Rahm Emanuel that suspended all pending federal regulations proposed by outgoing President George W. Bush so that they could be reviewed. This was comparable to prior moves by the Bush administration upon assuming control from Bill Clinton, who in his final 20 days in office issued 12 executive orders. In his first week, Obama also established a policy of producing a weekly Saturday morning video address available on Whitehouse.gov and YouTube, much like those released during his transition period. The policy is likened to Franklin Delano Roosevelt's fireside chats and George W. Bush's weekly radio addresses.
President Obama signed into law the following significant legislation during his first 100 days in the White House: the Lilly Ledbetter Fair Pay Act of 2009, the Children's Health Insurance Reauthorization Act of 2009 and the American Recovery and Reinvestment Act of 2009. Also during his first 100 days, the Obama administration reversed the following significant George W. Bush administration policies: it supported the UN declaration on sexual orientation and gender identity, relaxed enforcement of cannabis laws and lifted the 7½-year ban on federal funding for embryonic stem cell research. Obama also issued Executive Order 13492, ordering the closure of the Guantanamo Bay detention camp, although it remained open throughout his presidency. He also lifted some restrictions on travel and remittances to Cuba, ended the Mexico City Policy and signed an order requiring the Army Field Manual to be used as the guide for terror interrogations, banning torture and other coercive techniques such as waterboarding.
Obama also announced stricter guidelines regarding lobbyists in an effort to raise the ethical standards of the White House. The new policy bars departing aides from attempting to influence the administration for at least two years after leaving his staff. It also bars staff members from working on matters they previously lobbied on or from approaching agencies they had targeted as lobbyists, and it includes a ban on gifts from lobbyists. However, one day later he nominated William J. Lynn III, a lobbyist for defense contractor Raytheon, for the position of Deputy Secretary of Defense. Obama later nominated William Corr, an anti-tobacco lobbyist, for Deputy Secretary of Health and Human Services.
The beginning of the Obama presidency saw the emergence of the Tea Party movement, a conservative movement that began to heavily influence the Republican Party, pushing the GOP further to the right and toward greater partisanship. On February 18, 2009, Obama announced that the U.S. military presence in Afghanistan would be bolstered by 17,000 new troops by summer. The announcement followed the recommendation of several experts, including Defense Secretary Robert Gates, that additional troops be deployed to the strife-torn country. On February 27, 2009, Obama addressed Marines at Camp Lejeune, North Carolina and outlined an exit strategy for the Iraq War. Obama promised to withdraw all combat troops from Iraq by August 31, 2010, and a "transitional force" of up to 50,000 counterterrorism, advisory, training and support personnel by the end of 2011.
Obama signed two presidential memoranda concerning energy independence, ordering the Department of Transportation to establish higher fuel-efficiency standards before 2011 models were released and allowing states to raise their emissions standards above the national standard. Due to the economic crisis, the President enacted a pay freeze for senior White House staff making more than $100,000 per year. The action affected approximately 120 staffers and saved the United States government about $443,000. On March 10, 2009, in a meeting with the New Democrat Coalition, Obama told them that he was a "New Democrat" and a "pro-growth Democrat" who "supports free and fair trade" and was "very concerned about a return to protectionism".
On May 26, 2009, President Obama nominated Sonia Sotomayor for Associate Justice of the Supreme Court of the United States. Sotomayor was confirmed by the Senate, becoming the highest-ranking government official of Puerto Rican heritage to date. On July 7, 2009, Al Franken was sworn into the Senate, giving Senate Democrats the 60 votes needed to overcome a filibuster. On July 1, 2010, President Obama signed into law the Comprehensive Iran Sanctions, Accountability, and Divestment Act of 2010.
On October 28, 2009, Obama signed the National Defense Authorization Act for Fiscal Year 2010, which included the Matthew Shepard and James Byrd, Jr. Hate Crimes Prevention Act, expanding federal hate crime laws to cover sexual orientation, gender identity and disability. On January 21, 2010, the Supreme Court ruled in a 5–4 decision in Citizens United v. Federal Election Commission that the First Amendment prohibited the government from restricting independent political expenditures by a nonprofit corporation. On February 4, 2010, Republican Scott Brown of Massachusetts was sworn into the Senate, ending Senate Democrats' 60-vote, filibuster-proof majority.
On March 23, 2010, President Obama signed into law the signature legislation of his presidency, the Patient Protection and Affordable Care Act, together with the Health Care and Education Reconciliation Act of 2010, which together represented the most significant regulatory overhaul of the U.S. healthcare system since the passage of Medicare and Medicaid in 1965. On May 10, 2010, President Obama nominated Elena Kagan for Associate Justice of the Supreme Court of the United States. On July 21, 2010, President Obama signed into law the Dodd–Frank Wall Street Reform and Consumer Protection Act. Kagan was confirmed by the Senate on August 5, 2010 by a 63–37 vote and was sworn in by Chief Justice John Roberts on August 7, 2010.
On August 19, 2010, the 4th Stryker Brigade, 2nd Infantry Division became the last American combat brigade to withdraw from Iraq. In a speech from the Oval Office on August 31, 2010, Obama declared: "[T]he American combat mission in Iraq has ended. Operation Iraqi Freedom is over, and the Iraqi people now have lead responsibility for the security of their country". About 50,000 American troops remained in the country in an advisory capacity as part of "Operation New Dawn", which ran until the end of 2011 and was the final designated U.S. campaign of the war. The U.S. military continued to train and advise the Iraqi forces, as well as participate in combat alongside them.
On November 2, 2010, in the midterm elections, the Democratic Party suffered a net loss of six seats in the Senate and 63 seats in the House. Control of the House of Representatives switched from the Democratic Party to the Republican Party. The Democrats also lost a net of six state governorships and a net 680 seats in state legislatures, losing control of seven state Senates and 13 state Houses. This was the worst performance by the Democratic Party in a national election since 1946. The Blue Dog Coalition in the House was reduced from 54 members in 2008 to 26 members in 2011, accounting for roughly half of the Democratic losses in the election. This was the first United States national election in which Super PACs were used by Democrats and Republicans. Many commentators attributed the Republican Party's electoral success in 2010 to conservative Super PACs' campaign spending, the Tea Party movement, a backlash against President Obama, the failure to mobilize the Obama coalition to turn out and vote, and President Obama's failure to enact many of his progressive and liberal campaign promises.
On December 1, 2009, Obama announced at the U.S. Military Academy at West Point that the U.S. would send 30,000 more troops to Afghanistan. Anti-war organizations in the U.S. responded quickly and cities throughout the U.S. saw protests on December 2. Many protesters compared the decision to deploy more troops in Afghanistan to the expansion of the Vietnam War under the Johnson administration.
During the lame-duck session of the 111th United States Congress, President Obama signed into law the following significant legislation: the Tax Relief, Unemployment Insurance Reauthorization, and Job Creation Act of 2010, the Don't Ask, Don't Tell Repeal Act of 2010, the James Zadroga 9/11 Health and Compensation Act of 2010, the Shark Conservation Act of 2010 and the FDA Food Safety Modernization Act. On December 18, 2010, the Arab Spring began. On December 22, 2010, the U.S. Senate gave its advice and consent to ratification of New START by a vote of 71 to 26 on the resolution of ratification. The 111th United States Congress is considered one of the most productive Congresses, in terms of legislation passed, since the 89th Congress of Lyndon Johnson's Great Society.
On February 23, 2011, United States Attorney General Eric Holder announced that the federal government would no longer defend the Defense of Marriage Act in federal court. In response to the First Libyan Civil War, Secretary of State Hillary Clinton, U.N. Ambassador Susan Rice and Samantha Power, the National Security Council's director for multilateral affairs and human rights, led the hawkish diplomatic team within the Obama administration that helped convince President Obama to back airstrikes against the Libyan government. On March 19, 2011, the United States began military intervention in Libya.
Domestic reaction to the 2011 military intervention in Libya was mixed within the Democratic Party. Opponents of the intervention in the party included Rep. Dennis Kucinich, Sen. Jim Webb, Rep. Raul Grijalva, Rep. Mike Honda, Rep. Lynn Woolsey and Rep. Barbara Lee. The Congressional Progressive Caucus (CPC), an organization of progressive Democrats, said that the United States should conclude its campaign against Libyan air defenses as soon as possible. Supporters of the intervention in the party included former President Bill Clinton, Sen. Carl Levin, Sen. Dick Durbin, Sen. Jack Reed, Sen. John Kerry, House Minority Leader Nancy Pelosi, State Department Legal Adviser Harold Hongju Koh and commentator Ed Schultz.
On April 5, 2011, Vice President Joe Biden announced that Debbie Wasserman Schultz was President Obama's choice to succeed Tim Kaine as the 52nd Chair of the Democratic National Committee. On May 26, 2011, President Obama signed the PATRIOT Sunsets Extension Act of 2011, which was strongly criticized by some in the Democratic Party as a violation of civil liberties and a continuation of George W. Bush administration policies. House Democrats largely opposed the act, while Senate Democrats were slightly in favor of it.
On October 21, 2011, President Obama signed into law three free trade agreements: the United States–Korea Free Trade Agreement, the Panama–United States Trade Promotion Agreement and the United States–Colombia Free Trade Agreement. In the House of Representatives, Democrats largely opposed these agreements, while Senate Democrats were split. This was a continuation of President Bill Clinton's policy of support for free trade agreements.
When asked by David Gregory about his views on same-sex marriage on Meet the Press on May 5, 2012, Biden stated he supported same-sex marriage. On May 9, 2012, a day after North Carolina voters approved Amendment 1, President Obama became the first sitting United States President to come out in favor of same-sex marriage.
The 2012 Democratic Party platform for Obama's reelection ran over 26,000 words and set out his position on numerous national issues. On security issues, it pledges an "unshakable commitment to Israel's security" and says the party will try to prevent Iran from acquiring a nuclear weapon. It calls for a strong military, but argues that in the current fiscal environment, tough budgetary decisions must include defense spending. On controversial social issues, it supports abortion rights and same-sex marriage and says the party is "strongly committed to enacting comprehensive immigration reform". On the economic side, the platform calls for extending the tax cuts for families earning under $250,000 and promises not to raise their taxes. It praises the Patient Protection and Affordable Care Act (commonly called "Obamacare", though the platform does not use that term) and pledges to "adamantly oppose any efforts to privatize Medicare". On the rules of politics, it attacks the recent Supreme Court decision in Citizens United v. Federal Election Commission, which allows much greater political spending, and demands "immediate action to curb the influence of lobbyists and special interests on our political institutions".
Intense budget negotiations in the divided 112th Congress, in which Democrats resolved to fight Republican demands for decreased spending and no tax increases, threatened to shut down the government in April 2011 and later spurred fears that the United States would default on its debt. Continuing tight budgets were felt at the state level, where public-sector unions, a key Democratic constituency, battled Republican efforts to limit their collective bargaining powers in order to save money and reduce union power. This led to sustained protests by public-sector employees and walkouts by sympathetic Democratic legislators in states like Wisconsin and Ohio. The 2011 Occupy movement, a campaign on the left for more accountable economic leadership, failed to have the impact on Democratic Party leadership and policy that the Tea Party movement had on the Republicans. Its leadership proved ineffective and the movement fizzled out, though echoes of it could be found in the presidential nomination campaign of Senator Bernie Sanders in 2015–2016.
Conservatives criticized the president for "passive" responses to crises such as the 2009 Iranian protests and the 2011 Egyptian revolution. Additionally, liberal and Democratic activists objected to Obama's decisions to send reinforcements to Afghanistan, resume military trials of terror suspects at Guantanamo Bay and to help enforce a no-fly zone over Libya during that country's civil war. However, the demands of anti-war advocates were heeded when Obama followed through on a campaign promise to withdraw combat troops from Iraq.
The 2012 election was characterized by very high spending, especially on negative television ads in about ten critical states. Despite a weak economic recovery and high unemployment, the Obama campaign successfully mobilized its coalition of youth, blacks, Hispanics and women. Obama carried all the same states as in 2008 except two, Indiana and North Carolina. The election continued the pattern whereby Democrats won the popular vote in every presidential election after 1988 except 2004. Obama and the Democrats lost control of the Senate in the 2014 midterm elections, losing nine seats in that chamber and 13 in the Republican-controlled House.
2016 United States elections
2016 United States presidential election
2016 Democratic Party presidential primaries
National polling from 2013 to the summer of 2015 showed Hillary Clinton with a commanding lead over all of her potential primary opponents. Her main challenger was independent Vermont Senator Bernie Sanders, whose rallies grew larger and larger as he attracted overwhelming majorities among Democrats under age 40. The sharp divide between the two candidates was the establishment versus the political outsider, with Clinton the establishment candidate and Sanders the outsider. Clinton received endorsements from an overwhelming majority of officeholders. Clinton's core base during the primaries consisted of women, African Americans, Latino Americans, LGBT voters, moderates and older voters, while Sanders' core base included voters under age 40, men and progressives.
The ideological differences between the two candidates represented the ideological divide within the Democratic Party as a whole. Clinton, who cast herself as both a moderate and a progressive, is ideologically more of a centrist, representing the Third Way, New Democrat wing of the Democratic Party, as Bill Clinton and Barack Obama did. Sanders, who remained an independent in the Senate throughout the primaries despite running for President as a Democrat, is a self-described democratic socialist and ideologically more of a progressive or social democrat, representing the progressive/populist wing of the Democratic Party, which includes politicians such as Elizabeth Warren.
During the primaries, Sanders attacked Clinton for her ties to Wall Street and her previous support of the Defense of Marriage Act, the Trans-Pacific Partnership, the North American Free Trade Agreement, the Keystone Pipeline, the 2011 military intervention in Libya and the Iraq War, while Clinton attacked Sanders for voting against the Brady Handgun Violence Prevention Act, the Commodity Futures Modernization Act of 2000, the Protection of Lawful Commerce in Arms Act and the Comprehensive Immigration Reform Act of 2007. Clinton generally moved to the left as the campaign progressed, and adopted variations of some of Sanders' themes, such as opinions regarding trade and college tuition. Although she was generally favored to win in polls, she lost the general election to Donald Trump in the Electoral College, despite winning the popular vote.
Presidency of Donald Trump (2017–present)
115th United States Congress
As of September 13, 2017, 16 Senate Democrats cosponsored the Medicare for All Act of 2017. As of September 26, 2017, 120 House Democrats cosponsored the Expanded & Improved Medicare For All Act.
National Democratic Redistricting Committee
On January 12, 2017, the National Democratic Redistricting Committee, a 527 organization affiliated with the Democratic Party that focuses on redistricting reform, was launched. The chair, president and vice president of the umbrella organization are the 82nd Attorney General, Eric Holder; Elizabeth Pearson; and Alixandria "Ali" Lapp, respectively. Former President Obama said he would be involved with the committee.
Protests against Donald Trump
Inauguration of Donald Trump
At the inauguration of Donald Trump, 67 Democratic members of the United States House of Representatives boycotted the ceremony. This was the largest boycott by members of the United States Congress since the second inauguration of Richard Nixon, which an estimated 80 to 200 Democratic members of Congress boycotted.
2017 Donald Trump speech to joint session of Congress
Representative Maxine Waters and Supreme Court Justice Ruth Bader Ginsburg both declined to attend the 2017 Donald Trump speech to a joint session of Congress.
2018 State of the Union Address
As of January 19, 2018, six Democratic members of the United States House of Representatives had announced that they would boycott the 2018 State of the Union Address.
Democratic Party PACs
On January 23, 2017, Justice Democrats, a political action committee, was created by Cenk Uygur of The Young Turks, Kyle Kulinski of Secular Talk, Saikat Chakrabarti and Zack Exley (the latter two former leaders of the 2016 Bernie Sanders presidential campaign). The organization, formed as a result of the 2016 United States presidential election, has a stated goal of reforming the Democratic Party by running "a unified campaign to replace every corporate-backed member of Congress and rebuild the [Democratic] party from scratch", starting in the 2018 Congressional midterms.
On January 17, 2017, Third Way, a public policy think tank, launched New Blue, a $20 million campaign to study Democratic shortcomings in the 2016 elections and to offer a new economic agenda to help Democrats reconnect with the voters who had abandoned the party. The money was to be spent conducting extensive research, reporting and polling in Rust Belt states that once formed a Blue Wall but voted for Donald Trump in 2016. Many progressives criticized this as a desperate measure by the so-called establishment wing of the party to retain leadership.
On May 15, 2017, Onward Together, a political action organization, was launched by Hillary Clinton to fundraise for liberal organizations such as Swing Left, Indivisible, Color of Change, Emerge America and Run for Something.
2017 United States elections
2017 Democratic National Committee elections
The 2017 Democratic National Committee chairmanship election was primarily a contest between two candidates: Keith Ellison, United States Representative for Minnesota's 5th congressional district, and Tom Perez, the 26th United States Secretary of Labor. On February 25, 2017, Perez won the chairmanship and named Ellison Deputy Chair of the Democratic National Committee, a newly created position. The Obama administration had pushed for Perez to run against Ellison, and President Obama personally called DNC members to encourage them to vote for Perez.
- Witcover, Jules (2003), "Chapter 1", Party of the People: A History of the Democrats
- Micklethwait, John; Wooldridge, Adrian (2004). The Right Nation: Conservative Power in America. p. 15. "The country possesses the world's oldest written constitution (1787); the Democratic Party has a good claim to being the world's oldest political party."
- Kenneth Janda; Jeffrey M. Berry; Jerry Goldman (2010). The Challenge of Democracy: American Government in Global Politics. Cengage Learning. p. 276.
- Theodore Caplow; Howard M. Bahr; Bruce A. Chadwick; John Modell (1994). Recent Social Trends in the United States, 1960–1990. McGill-Queen's Press. p. 337. They add: "The Republican party, nationally, moved from right-center toward the center in 1940s and 1950s, then moved right again in the 1970s and 1980s."
- Robert V. Remini, Martin Van Buren and the Making of the Democratic Party (1959).
- Sean Wilentz, The Rise of American Democracy: Jefferson to Lincoln (2005)
- Mary Beth Norton et al., A People and a Nation, Volume I: to 1877 (Houghton Mifflin, 2007) p. 287
- John Ashworth, "Agrarians" & "Aristocrats": Party Political Ideology in the United States, 1837–1846 (1983).
- Frank Towers, "Mobtown's Impact on the Study of Urban Politics in the Early Republic." Maryland Historical Magazine 107 (Winter 2012) pp. 469–75, p. 472, citing Robert E, Shalhope, The Baltimore Bank Riot: Political Upheaval in Antebellum Maryland (2009) p. 147.
- Earle (2004), p. 19
- Taylor (2006), p. 54
- Sean Wilentz, Chants Democratic: New York City and the Rise of the American Working Class, 1788–1850 (1984)
- Daniel Walker Howe, What Hath God Wrought: The Transformation of America 1815–1848 (2007) pp. 705–706.
- John Mack Faragher et al. Out of Many: A History of the American People (2nd ed. 1997) p. 413
- Wilentz, The Rise of American Democracy: Jefferson to Lincoln (2005) ch. 21–22.
- David M. Potter, The Impending Crisis, 1848–1861 (1976) ch. 15–16.
- Yonatan Eyal, The Young America Movement and the Transformation of the Democratic Party, 1828–1861, (2007)
- Eyal, The Young America Movement and the Transformation of the Democratic Party, 1828–1861, p. 79
- Roy F. Nichols, "Franklin Pierce," Dictionary of American Biography (1934) reprinted in Nancy Capace, ed. (2001). Encyclopedia of New Hampshire. pp. 268–69.
- William E. Gienapp, The Origins of the Republican Party, 1852–1856 (1987) explores statistically the flow of voters between parties in the 1850s.
- Michael Todd Landis. Northern Men with Southern Loyalties: The Democratic Party and the Sectional Crisis (2014).
- Roy F. Nichols. The Disruption of American Democracy: A History of the Political Crisis That Led Up To The Civil War (1948).
- Leonard Richards. The Slave Power: The Free North and Southern Domination, 1780–1860 (2000).
- A. James Fuller, ed., The Election of 1860 Reconsidered (2012) online
- David M. Potter. The Impending Crisis, 1848–1861 (1976). ch. 16.
- Jennifer L. Weber, Copperheads: The Rise and Fall of Lincoln's Opponents in the North (2006)
- Jack Waugh, Reelecting Lincoln: The Battle for the 1864 Presidency (1998)
- Patrick W. Riddleberger, 1866: The Critical Year Revisited (1979)
- Edward Gambill, Conservative Ordeal: Northern Democrats and Reconstruction, 1865–1868 (1981).
- Addkison-Simmons, D. (2010). Henry Mason Mathews. e-WV: The West Virginia Encyclopedia. Retrieved December 11, 2012, from "Henry Mason Mathews".
- Francis Lynde Stetson to Cleveland, October 7, 1894 in Allan Nevins, ed. Letters of Grover Cleveland, 1850–1908 (1933) p. 369
- Richard J. Jensen, The Winning of the Midwest: Social and Political Conflict, 1888–96 (1971) pp. 229–30
- Kleppner (1979)
- Stanley L. Jones, The Presidential Election of 1896 (1964)
- Richard J. Jensen, The Winning of the Midwest: Social and Political Conflict 1888–1896 (1971) free online edition
- Michael Kazin, A Godly Hero: The Life of William Jennings Bryan (2006)
- Lewis L. Gould, America in the Progressive Era, 1890–1914 (2001)
- R. Hal Williams, Realigning America: McKinley, Bryan, and the Remarkable Election of 1896 (2010)
- Brett Flehinger, The 1912 Election and the Power of Progressivism: A Brief History with Documents (2002)
- John Milton Cooper, Woodrow Wilson: A Biography (2009)
- John Milton Cooper, Breaking the Heart of the World: Woodrow Wilson and the Fight for the League of Nations (2001).
- Douglas B. Craig, After Wilson: The Struggle for the Democratic Party, 1920–1934 (1992).
- Robert K. Murray, The 103rd Ballot: Democrats and Disaster in Madison Square Garden (1976)
- Jerome M. Clubb and Howard W. Allen, "The Cities and the Election of 1928: Partisan Realignment?," American Historical Review Vol. 74, No. 4 (Apr., 1969), pp. 1205–20 in JSTOR
- Daniel Disalvo. "The Politics of a Party Faction: The Liberal-Labor Alliance in the Democratic Party, 1948–1972," Journal of Policy History (2010) vol. 22#3 pp. 269–99 in Project MUSE
- Max M. Kampelman, The Communist Party vs. the C.I.O.: a study in power politics (1957) ch. 11.
- Tim McNeese, The Cold War and Postwar America 1946–1963 (2010) p. 39.
- Robert A. Divine, "The Cold War and the Election of 1948," Journal of American History Vol. 59, No. 1 (Jun., 1972), pp. 90–110 in JSTOR.
- Palermo (2001).
- Theodore H. White, The Making of the President 1972 (1973).
- Bruce Miroff, The Liberals' Moment: The McGovern Insurgency and the Identity Crisis of the Democratic Party (University Press of Kansas, 2007).
- Jules Witcover, Marathon: The pursuit of the presidency, 1972–1976 (1977).
- Gary M. Fink, and Hugh Davis Graham, eds. The Carter presidency: Policy choices in the post-New Deal era (University Press of Kansas, 1998).
- John Dumbrell, The Carter presidency: A re-evaluation (Manchester University Press, 1995)
- David Farber, Taken Hostage: The Iran Hostage Crisis and America's First Encounter with Radical Islam (2005)
- Stanley B. Greenberg, Middle Class Dreams: Politics and Power of the New American Majority (1996).
- Steven M. Gillon, The Democrats' Dilemma: Walter F. Mondale and the Liberal Legacy (1992) pp. 365–90
- Jack W. Germond and Jules Witcover. Whose Broad Stripes and Bright Stars? (1989)
- Risen, Clay (March 5, 2006). "How the South was won". The Boston Globe. Retrieved November 24, 2006.
- "Exit Polls". CNN. November 2, 2004. Retrieved November 18, 2006.
- Dilip Hiro, Desert Shield to Desert Storm: The Second Gulf War (2003) p. 300
- Kilborn, Peter T. (November 19, 1993). "The Free Trade Accord: Labor, Unions Vow to Punish Pact's Backers". The New York Times. Retrieved November 17, 2006.
- Springer, Simon; Birch, Kean; MacLeavy, Julie, eds. (2016). The Handbook of Neoliberalism. Routledge. p. 144. ISBN 978-1138844001.
- Nikolaos Karagiannis, Zagros Madjd-Sadjadi, Swapan Sen (eds). The US Economy and Neoliberalism: Alternative Strategies and Policies. Routledge, 2013. ISBN 1138904910. p. 58.
- Sebastian Mallaby. The New York Times. "Why We Deregulated the Banks". July 29, 2011.
- Scheidel, Walter (2017). The Great Leveler: Violence and the History of Inequality from the Stone Age to the Twenty-First Century. Princeton University Press. p. 416. ISBN 978-0691165028.
- It was renewed in 2006 by a vote of 280–138 in the House (with Democrats breaking 66 for and 124 against) and 89-10 in the Senate (with Democrats splitting 33 in favor and 9 against). "House approves Patriot Act renewal," CNN News, March 7, 2006.
- Mahajan, Rahul (January 28, 2004). "Kerry vs. Dean; New Hampshire vs. Iraq". Common Dreams NewsCenter. Archived from the original on October 19, 2006. Retrieved October 12, 2006.
- Thomas, Evan, Clift, Eleanor, Staff of Newsweek (2005).Election 2004: How Bush Won and What You Can Expect in the Future. PublicAffairs. ISBN 1-58648-293-9.
- Kelly, Jack (September 5, 2004). "Kerry's Fall From Grace". Pittsburgh Post-Gazette. Retrieved October 10, 2006. See also: Last, Jonathan V. (November 12, 2004). "Saving John Kerry". The Weekly Standard. Retrieved October 10, 2006.
- Wenner, Jann S. (November 17, 2004). "Why Bush Won". Rolling Stone. Retrieved October 10, 2006.
- "Interview with Howard Dean". This Week. January 23, 2005. American Broadcasting Company (ABC). Retrieved on October 11, 2006.
- Hook, Janet (October 26, 2006). "A right kind of Democrat". Los Angeles Times. See also: Dewan, Shaila; Kornblut, Anne E. (October 30, 2006). "In Key House Races, Democrats Run to the Right". The New York Times. Retrieved November 10, 2006.
- Toner, Robin (November 12, 2006). "Incoming Democrats Put Populism Before Ideology". The New York Times. Retrieved April 24, 2007. Burt, Nick; Bleifuss, Joel (November 8, 2006). "Progressive Caucus Rising". In These Times. Retrieved February 15, 2007. Bacon Jr., Perry; Cox, Ana Marie; Tumulty, Karen (November 16, 2006). "5 Myths About the Midterm Elections". Time Magazine. Retrieved February 11, 2007. Bazinet, Kenneth R. (November 19, 2006). "Hil's no dump Dean fan". New York Daily News. Archived from the original on October 6, 2008. Retrieved February 11, 2007.
- "Corruption named as key issue by voters in exit polls". CNN. November 8, 2006. Retrieved January 25, 2007.
- Haynes Johnson and Dan Balz, The Battle for America 2008: The Story of an Extraordinary Election (2009).
- Ruane, Michael E.; Davis, Aaron C. (January 22, 2009). "D.C.'s Inauguration Head Count: 1.8 Million". The Washington Post. Retrieved May 4, 2010.
- "Obama halts all regulations pending review". Associated Press. January 20, 2009. Retrieved February 1, 2009.
- Memmott, Mark (January 21, 2009). "Obama freezing pay of top staff; signs ethics rules". USA Today. Retrieved February 1, 2009.
- Loven, Jennifer (January 21, 2009). "Obama freezes salaries of some White House aides". Yahoo! News. Yahoo! Inc./The Associated Press. Archived from the original on February 5, 2009. Retrieved February 1, 2009.
- "Obama breaks his own rule". CNN. January 23, 2009. Retrieved January 23, 2009.
- "Obama Nominee Runs Into New Lobby Rules". The Washington Post. January 23, 2009.
- "Promises, Promises: No lobbyists at WH, except ..." Associated Press. February 2, 2009. Archived from the original on February 5, 2009. Retrieved February 3, 2009.
- Hodge, Amanda (February 19, 2009). "Obama launches Afghanistan surge". The Australian. (If you receive a 403 "forbidden" error using the previous link, try "Obama launches Afghanistan surge")
- "Gates: More Troops For Afghanistan". The New York Post. January 27, 2009.
- "Obama outlines Iraq pullout plan". BBC News. February 27, 2009. Retrieved January 4, 2010.
- "Obama's first day: Pay freeze, lobbying rules". MSNBC. January 21, 2009. Retrieved February 1, 2009.
- Kravitz, Derek (January 22, 2009). "Adding Up the White House Pay Freeze". The Washington Post. The Washington Post Company. Retrieved February 1, 2009.
- "Obama: 'I am a New Democrat'".
- Londoño, Ernesto (August 19, 2010). "Operation Iraqi Freedom ends as last combat soldiers leave Baghdad". The Washington Post.
- "Obama's full speech: 'Operation Iraqi Freedom is over'". MSNBC. August 31, 2010. Retrieved October 23, 2010.
- Al Jazeera and agencies (August 19, 2010). "Last US combat brigade leaves Iraq". Al Jazeera and agencies. Retrieved August 19, 2010.
- Baker, Peter (December 5, 2009). "How Obama Came to Plan for 'Surge' in Afghanistan". The New York Times. Retrieved March 16, 2015.
- "Anti-war Leaders Blast Escalation of Afghanistan War". Fight Back! News. December 1, 2009.
- "Obama's Afghanistan decision evokes LBJ's 1965 order on Vietnam buildup".
- Carl Hulse; David M. Herzenhorn (December 20, 2010). "111th Congress – One for the History Books". The New York Times.
- David A. Fahrenthold; Philip Rucker; Felicia Sonmez (December 23, 2010). "Stormy 111th Congress was still the most productive in decades". The Washington Post.
- Lisa Lerer; Laura Litvan (December 22, 2010). "No Congress Since '60s Makes as Much Law as 111th Affecting Most Americans". Bloomberg News.
- Guy Raz (December 26, 2010). "This Congress Did A Lot, But What's Next?". NPR.
- Stein, Sam (May 6, 2012). "Joe Biden Tells 'Meet The Press' He's 'Comfortable' With Marriage Equality". The Huffington Post. Retrieved August 20, 2012.
- See "Moving America Forward 2012 Democratic National Platform".
- Kane, Paul; Rucker, Philip; Farenthold, David A. (April 8, 2011). "Government shutdown averted: Congress agrees to budget deal, stopgap funding". The Washington Post. Retrieved July 14, 2012.
- Roger L. Ray (2016). Progressive Conversations: Essays on Matters of Social Justice for Critical Thinkers. Wipf and Stock. p. 124.
- Ryan Lizza. "The Great Divide: Clinton, Sanders, and the future of the Democratic Party". The New Yorker. March 21, 2016.
- Andrew McGill. "A Democratic Primary That's 2008 All Over Again?". The Atlantic. May 25, 2016.
- Max Ehrenfreund. "How Hillary Clinton's positions have changed as she's run against Bernie Sanders". The Washington Post. April 29, 2016.
- "S.1804 - Medicare for All Act of 2017".
- "H.R.676 - Expanded & Improved Medicare For All Act".
- Sneed, Tierney (October 17, 2016). "Obama to Take on Redistricting in Post-Presidency Project with Eric Holder". Talking Points Memo Blog. Retrieved October 27, 2016.
- "Holder launches Democratic redistricting initiative".
- Dovere, Edward-Isaac (October 17, 2016). "Obama, Holder to Lead Post-Trump Redistricting Campaign". Politico. Retrieved October 27, 2016.
- "About". NDRC official website.
- 67 "Democratic United States Congress Members Planning to Skip Inauguration".
- "What to Know About the First Lawmakers to Boycott a Presidential Inauguration".
- Dwilson, Stephanie Dube (1 March 2017). "Donald Trump's Presidential Address to Congress: List of Democrats Not Attending".
- "Platform". Justice Democrats. Retrieved January 25, 2017.
- Mic. "Cenk Uygur, Bernie Sanders staffers team up to take over the Democratic Party". Mic. Retrieved January 27, 2017.
- "Democratic Party rethink gets $20 million injection".
- Palmer, Anna (May 15, 2017). "Hillary Clinton launches new political group: 'Onward Together'". Politico. Retrieved May 17, 2017.
- "Obama All But Endorses Tom Perez Against Keith Ellison For DNC Chair".
- "Ex-President Barack Obama Orchestrated Tom Perez DNC Chair Victory".
- American National Biography (20 volumes, 1999) covers all politicians no longer alive; online and paper copies at many academic libraries. Older Dictionary of American Biography.
- Dinkin, Robert J. Campaigning in America: A History of Election Practices. (Greenwood 1989)
- Kurian, George Thomas ed. The Encyclopedia of the Democratic Party(4 vol. 2002).
- Remini, Robert V.. The House: The History of the House of Representatives (2006), extensive coverage of the party
- Schlesinger Jr., Arthur Meier ed. History of American Presidential Elections, 1789–2000 (various multivolume editions, latest is 2001). For each election includes history and selection of primary documents. Essays on some elections are reprinted in Schlesinger, The Coming to Power: Critical presidential elections in American history (1972)
- Schlesinger, Arthur Meier, Jr. ed. History of U.S. Political Parties (1973) multivolume
- Shafer, Byron E. and Anthony J. Badger, eds. Contesting Democracy: Substance and Structure in American Political History, 1775–2000 (2001), most recent collection of new essays by specialists on each time period:
- includes: "State Development in the Early Republic: 1775–1840" by Ronald P. Formisano; "The Nationalization and Racialization of American Politics: 1790–1840" by David Waldstreicher; "'To One or Another of These Parties Every Man Belongs;": 1820–1865 by Joel H. Silbey; "Change and Continuity in the Party Period: 1835–1885" by Michael F. Holt; "The Transformation of American Politics: 1865–1910" by Peter H. Argersinger; "Democracy, Republicanism, and Efficiency: 1885–1930" by Richard Jensen; "The Limits of Federal Power and Social Policy: 1910–1955" by Anthony J. Badger; "The Rise of Rights and Rights Consciousness: 1930–1980" by James T. Patterson, Brown University; and "Economic Growth, Issue Evolution, and Divided Government: 1955–2000" by Byron E. Shafer
- Oldaker, Nikki, Samuel Tilden the Real 19th President (2006)
- Allen, Oliver E. The Tiger: The Rise and Fall of Tammany Hall (1993)
- Baker, Jean. Affairs of Party: The Political Culture of Northern Democrats in the Mid-Nineteenth Century (1983).
- Cole, Donald B. Martin Van Buren And The American Political System (1984)
- Bass, Herbert J. "I Am a Democrat": The Political Career of David B. Hill 1961.
- Craig, Douglas B. After Wilson: The Struggle for the Democratic Party, 1920–1934 (1992)
- Earle, Jonathan H. Jacksonian Antislavery and the Politics of Free Soil, 1824–1854 (2004)
- Eyal, Yonatan. The Young America Movement and the Transformation of the Democratic Party, 1828–1861 (2007) 252 pp.
- Flick, Alexander C. Samuel Jones Tilden: A Study in Political Sagacity 1939.
- Formisano, Ronald P. The Transformation of Political Culture: Massachusetts Parties, 1790s–1840s (1983)
- Gammon, Samuel Rhea. The Presidential Campaign of 1832 (1922)
- Hammond, Bray. Banks and Politics in America from the Revolution to the Civil War (1960), Pulitzer prize. Pro-Bank
- Jensen, Richard. Grass Roots Politics: Parties, Issues, and Voters, 1854–1983 (1983)
- Keller, Morton. Affairs of State: Public Life in Late Nineteenth Century America 1977.
- Kleppner, Paul et al. The Evolution of American Electoral Systems (1983), essays, 1790s to 1980s.
- Kleppner, Paul. The Third Electoral System 1853–1892: Parties, Voters, and Political Cultures (1979), analysis of voting behavior, with emphasis on region, ethnicity, religion and class.
- McCormick, Richard P. The Second American Party System: Party Formation in the Jacksonian Era (1966)
- Merrill, Horace Samuel. Bourbon Democracy of the Middle West, 1865–1896 1953.
- Nevins, Allan. Grover Cleveland: A Study in Courage 1934. Pulitzer Prize
- Remini, Robert V. Martin Van Buren and the Making of the Democratic Party (1959)
- Rhodes, James Ford. The History of the United States from the Compromise of 1850 8 vol (1932)
- Sanders, Elizabeth. Roots of Reform: Farmers, Workers, and the American State, 1877–1917 (1999). argues the Democrats were the true progressives and GOP was mostly conservative
- Sarasohn, David. The Party of Reform: Democrats in the Progressive Era (1989), covers 1910–1930.
- Sharp, James Roger. The Jacksonians Versus the Banks: Politics in the States after the Panic of 1837 (1970)
- Silbey, Joel H. A Respectable Minority: The Democratic Party in the Civil War Era, 1860–1868 (1977)
- Silbey, Joel H. The American Political Nation, 1838–1893 (1991)
- Stampp, Kenneth M. Indiana Politics during the Civil War (1949)
- Welch, Richard E. The Presidencies of Grover Cleveland 1988.
- Whicher, George F. William Jennings Bryan and the Campaign of 1896 (1953), primary and secondary sources.
- Wilentz, Sean. The Rise of American Democracy: Jefferson to Lincoln (2005), highly detailed synthesis.
- Woodward, C. Vann. Origins of the New South, 1877–1913 1951. online edition at ACLS History ebooks
- Allswang, John M. New Deal and American Politics (1970)
- Andersen, Kristi. The Creation of a Democratic Majority, 1928–1936 (1979)
- Barone, Michael. The Almanac of American Politics 2016: The Senators, the Representatives and the Governors: Their Records and Election Results, Their States and Districts (2015), massive compilation covers all the live politicians; published every two years since 1976.
- Burns, James MacGregor. Roosevelt: The Lion and the Fox (1956)
- Cantril, Hadley and Mildred Strunk, eds. Public Opinion, 1935–1946 (1951), compilation of public opinion polls from US and elsewhere.
- Crotty, William J. Winning the presidency 2008 (Routledge, 2015).
- Dallek, Robert. Lyndon B. Johnson: Portrait of a President (2004)
- Fraser, Steve, and Gary Gerstle, eds. The Rise and Fall of the New Deal Order, 1930–1980 (1990), essays.
- Hamby, Alonzo. Liberalism and Its Challengers: From F.D.R. to Bush (1992).
- Jensen, Richard. Grass Roots Politics: Parties, Issues, and Voters, 1854–1983 (1983)
- Jensen, Richard. "The Last Party System, 1932–1980," in Paul Kleppner, ed. Evolution of American Electoral Systems (1981)
- Judis, John B. and Ruy Teixeira. The Emerging Democratic Majority (2004) demography is destiny
- "Movement Interruptus: September 11 Slowed the Democratic Trend That We Predicted, but the Coalition We Foresaw Is Still Taking Shape" The American Prospect Vol 16. Issue: 1. January 2005.
- Kennedy, David M. Freedom from Fear: The American People in Depression and War, 1929–1945 (2001), synthesis
- Kleppner, Paul et al. The Evolution of American Electoral Systems (1983), essays, 1790s to 1980s.
- Ladd Jr., Everett Carll with Charles D. Hadley. Transformations of the American Party System: Political Coalitions from the New Deal to the 1970s 2nd ed. (1978).
- Lamis, Alexander P. ed. Southern Politics in the 1990s (1999)
- Martin, John Bartlow. Adlai Stevenson of Illinois: The Life of Adlai E. Stevenson (1976),
- Moscow, Warren. The Last of the Big-Time Bosses: The Life and Times of Carmine de Sapio and the Rise and Fall of Tammany Hall (1971)
- Panagopoulos, Costas, ed. Strategy, Money and Technology in the 2008 Presidential Election (Routledge, 2014).
- Patterson, James T. Grand Expectations: The United States, 1945–1974 (1997) synthesis.
- Patterson, James T. Restless Giant: The United States from Watergate to Bush vs. Gore (2005) synthesis.
- Patterson, James. Congressional Conservatism and the New Deal: The Growth of the Conservative Coalition in Congress, 1933–39 (1967)
- Plotke, David. Building a Democratic Political Order: Reshaping American Liberalism in the 1930s and 1940s (1996).
- Nicol C. Rae; Southern Democrats Oxford University Press. 1994
- Sabato, Larry J. Divided States of America: The Slash and Burn Politics of the 2004 Presidential Election (2005), analytic.
- Sabato, Larry J. and Bruce Larson. The Party's Just Begun: Shaping Political Parties for America's Future (2001), textbook.
- Shafer, Byron E. Quiet Revolution: The Struggle for the Democratic Party and the Shaping of Post-Reform Politics (1983)
- Shelley II, Mack C. The Permanent Majority: The Conservative Coalition in the United States Congress (1983)
- Sundquist, James L. Dynamics of the Party System: Alignment and Realignment of Political Parties in the United States (1983)
- Ling, Peter J. The Democratic Party: A Photographic History (2003).
- Rutland, Robert Allen. The Democrats: From Jefferson to Clinton (1995).
- Schlesinger, Galbraith. Of the People: The 200 Year History of the Democratic Party (1992)
- Taylor, Jeff. Where Did the Party Go?: William Jennings Bryan, Hubert Humphrey, and the Jeffersonian Legacy (2006), for history and ideology of the party.
- Witcover, Jules. Party of the People: A History of the Democrats (2003)
- Schlesinger, Arthur Meier Jr. ed. History of American Presidential Elections, 1789–2000 (various multivolume editions, latest is 2001). For each election includes history and selection of primary documents.
- The Digital Book Index includes some newspapers for the main events of the 1850s, proceedings of state conventions (1850–1900), and proceedings of the Democratic National Conventions. Other references of the proceedings can be found in the linked article years on the List of Democratic National Conventions.
- Bartlett, Bruce (2008). Wrong on Race: The Democratic Party's Buried Past. New York: Palgrave MacMillan. Retrieved August 4, 2015.
- Michael Todd Landis. "Dinesh D’Souza Claims in a New Film that the Democratic Party Was Pro-Slavery. Here's the Sad Truth" (March 13, 2016). History News Network.
- Campaign text books
The national committees of major parties published a "campaign textbook" every presidential election from about 1856 to about 1932. They were designed for speakers and contain statistics, speeches, summaries of legislation, and documents, with plenty of argumentation. Only large academic libraries have them, but some are online:
- Address to the Democratic Republican Electors of the State of New York (1840). Published before the formation of party national committees.
- The Campaign Text Book: Why the People Want a Change. The Republican Party Reviewed... (1876)
- The Campaign Book of the Democratic Party (1882)
- The Political Reformation of 1884: A Democratic Campaign Book
- The Campaign Text Book of the Democratic Party of the United States, for the Presidential Election of 1888
- The Campaign Text Book of the Democratic Party for the Presidential Election of 1892
- Democratic Campaign Book. Presidential Election of 1896
|
This is an example of an advanced first-time programming project. You will learn the basics of how digital devices can represent numbers using only 0's and 1's. You will be writing a simple program to convert numbers between binary, decimal and hexadecimal notation.
It is assumed that you know how to use a text editor to write an HTML file containing your program, and that you know how to load and run the file in your browser.
Decimal and Digital: Base 10 and Base 2
You've been using decimal (base 10) numbers all your life, so by now they are second nature to you. You probably know that there are other ways to represent numbers. For example, computers and other digital devices store everything using binary (base 2) numbers, that is, numbers made up of only 0's and 1's. This is because it is much easier to distinguish between binary levels (on/off) than between the ten different levels (0–9) it would take to encode decimal numbers.
So how can you represent numbers using only 0's and 1's? Well, it's the same principle as for decimal numbers. Each digit represents a place value. For decimal (base 10) numbers, the place values are increasing powers of ten, as shown in the table below.
Decimal representation of 3,897:

| place value (words) | thousands | hundreds | tens | ones |
|---|---|---|---|---|
| place value (exponential notation) | 10^3 | 10^2 | 10^1 | 10^0 |
| which is the same as | 10×10×10 | 10×10 | 10 | 1 |
| the digit tells how many | 3 | 8 | 9 | 7 |
Quick tutorial on exponential notation (you can skip this if you already know it). Exponential notation is a compact way of expressing repeated multiplication of a number. For example, as you saw in the table:

10^3 = 10 × 10 × 10 = 1,000

An equivalent statement in words is: ten to the third power equals ten times ten times ten, which is also equal to one thousand.

The exponent (3, in this case) tells how many times to multiply the base (10, in this case). Any number raised to the 0th power is equal to 1. That's it!
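If it helps to see the same arithmetic in code, here is a tiny JavaScript check (the variable names are just for illustration, not part of the project):

var thousand = Math.pow(10, 3);               // 10 * 10 * 10 = 1000
var n = 3 * 1000 + 8 * 100 + 9 * 10 + 7 * 1;  // rebuild 3,897 from its decimal place values
// thousand is 1000 and n is 3897, agreeing with the table above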
For binary (base 2) numbers it is similar, but now there are only two digits available (0 and 1), and the place values are increasing powers of two. The next table shows the same number (3,897) in binary representation. Obviously, it takes a lot more digits than in base ten!
Binary representation of 3,897: 1111 0011 1001

| place value | 2048 | 1024 | 512 | 256 | 128 | 64 | 32 | 16 | 8 | 4 | 2 | 1 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| the digit tells how many | 1 | 1 | 1 | 1 | 0 | 0 | 1 | 1 | 1 | 0 | 0 | 1 |
Check it out for yourself and make sure that it all adds up!
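One quick way to check it in JavaScript (a sketch you can paste into a browser console; it is not part of the converter itself) is to add up the place values that hold a 1, or to let the built-in parseInt do the base-2 conversion:

var total = 2048 + 1024 + 512 + 256 + 32 + 16 + 8 + 1;  // place values with a 1 digit; equals 3897
var fromBinary = parseInt("111100111001", 2);           // parse the same bits as a base-2 number; also 3897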
Bits and Bytes
Each binary digit is called a bit. Use the table above to see how many bits it takes to represent 3,897. Go ahead, we'll wait.
That's right, twelve bits. The right-most bit (the "one" bit) is the least significant bit and the left-most bit (the "two-thousand forty-eight" bit) is the most significant bit. Since it is easy to lose your place when reading a long string of 0's and 1's, binary numbers are often displayed in 4-bit groups, like this: 1111 0011 1001.
Two of these 4-bit groups (eight bits altogether) make a byte, which is the basic unit for measuring the size of a digital memory or storage device. What can you store in one byte? Think about how many different numbers can be encoded with one byte. (It may help to make an analogy with decimal numbers first: how many different numbers can you encode with 2 digits? how many with 3 digits?)
A kilobyte (abbreviated kB) is not 1000 bytes, as you might think if you know your metric prefixes. Since this is the digital world, everything is in powers of 2, so a kilobyte is 1024 bytes (= 2^10 bytes). Similarly, a megabyte (MB) is 1024×1024 bytes (= 1,048,576 bytes = 2^20 bytes).
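These powers of two are easy to confirm in JavaScript (a throwaway sketch; it also answers the one-byte question above):

var valuesPerByte = Math.pow(2, 8);   // 256 different values can be encoded in one byte
var kilobyte = Math.pow(2, 10);       // 1024 bytes
var megabyte = Math.pow(2, 20);       // 1,048,576 bytes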
Hexadecimal: Base 16
Binary numbers are fine for computers, but are not particularly handy for people to use. However, when you are programming a machine that works with binary numbers, it is sometimes handy to work with numbers represented as powers of 2 instead of powers of 10. Hexadecimal notation (base 16) is often used for this reason. As you would expect, in hexadecimal there are 16 digits, and each digit represents a power of 16. We use decimal digits for 0–9, and the first six letters of the alphabet, A–F, for the hexadecimal digits corresponding to decimal numbers 10–15. The table below shows the first 16 numbers (starting with zero) in binary, hexadecimal and decimal notation.
Binary, Hexadecimal and Decimal Numbers

| Binary | Hexadecimal | Decimal |
|---|---|---|
| 0000 | 0 | 0 |
| 0001 | 1 | 1 |
| 0010 | 2 | 2 |
| 0011 | 3 | 3 |
| 0100 | 4 | 4 |
| 0101 | 5 | 5 |
| 0110 | 6 | 6 |
| 0111 | 7 | 7 |
| 1000 | 8 | 8 |
| 1001 | 9 | 9 |
| 1010 | A | 10 |
| 1011 | B | 11 |
| 1100 | C | 12 |
| 1101 | D | 13 |
| 1110 | E | 14 |
| 1111 | F | 15 |
Since 16 = 2^4, it is not surprising to see that one hexadecimal digit corresponds to 4 bits. Can you see what the hexadecimal representation for 3,897 would be?
Another interesting fact that you may have noticed is that the least significant bit of the binary representation indicates whether the number is odd or even. Error correction code algorithms for transmitting and receiving data (used in everything from NASA satellite transmissions to cell phones to CD players) often make use of this property.
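Both facts are easy to experiment with in JavaScript using the built-in radix support (a sketch for exploring, not the project code itself):

var hex = (3897).toString(16).toUpperCase();  // "F39" – each hex digit covers 4 bits of 1111 0011 1001
var back = parseInt("F39", 16);               // 3897 again
var isOdd = (3897 % 2) === 1;                 // true – the least significant bit of 3,897 is 1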
An easy way to have your input elements aligned neatly is to use an HTML <TABLE> element (an example is shown below). The table has two columns. Each input is a separate row in the table. The labels are in the first column, and the input fields are in the second column. The labels are aligned at the right-hand edge of their column, and the input fields are aligned at the left-hand edge of their column. The HTML code for this sample table is also shown (beneath the table itself).
<TABLE style="text-align:center; border-style:solid;" cellpadding="10">
<TR><TH COLSPAN="2">Binary/Decimal/Hexadecimal Converter Interface Example</TH></TR>
<TR><TD style="text-align:right;"><B>Binary: </B></TD>
<TD style="text-align:left;"><INPUT NAME="bin" VALUE="0" onChange="ConvertBin(this.form)" SIZE=10></TD></TR>
<TR><TD style="text-align:right;"><B>Decimal: </B></TD>
<TD style="text-align:left;"><INPUT NAME="dec" VALUE="0" onChange="ConvertDec(this.form)" SIZE=10></TD></TR>
<TR><TD style="text-align:right;"><B>Hexadecimal: </B></TD>
<TD style="text-align:left;"><INPUT NAME="hex" VALUE="0" onChange="ConvertHex(this.form)" SIZE=10></TD></TR>
</TABLE>
The first entry in the Bibliography is an excellent reference for information on writing a base conversion algorithm. Combined with the material available here, you should be able to put together a binary/decimal/hexadecimal converter.
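To make the wiring concrete, here is one possible JavaScript sketch that would go inside the <SCRIPT> section of your HTML file. It leans on the built-in parseInt and Number.toString radix conversions rather than a hand-written digit-by-digit algorithm (writing your own algorithm, as in the first Bibliography entry, is the real point of the project), and the function and field names simply match the example form above:

function ConvertDec(form) {
    var n = parseInt(form.dec.value, 10);           // read the decimal field as a base-10 number
    form.bin.value = n.toString(2);                 // rewrite the other two fields in base 2 ...
    form.hex.value = n.toString(16).toUpperCase();  // ... and base 16
}
function ConvertBin(form) {
    var n = parseInt(form.bin.value, 2);            // read the binary field as a base-2 number
    form.dec.value = n.toString(10);
    form.hex.value = n.toString(16).toUpperCase();
}
function ConvertHex(form) {
    var n = parseInt(form.hex.value, 16);           // read the hexadecimal field as a base-16 number
    form.dec.value = n.toString(10);
    form.bin.value = n.toString(2);
}

Note that this sketch does no input checking; invalid text will show up as "NaN" in the other fields, so validating the input before converting is a good first sub-task.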
By the way, so you can check your answers to the questions from earlier in the Introduction:
- One byte can encode 256 (2^8) unique values.
- The hexadecimal representation for 3,897 is "F39". Each hexadecimal digit corresponds to 4 bits of the binary representation: 1111 0011 1001.
Terms and Concepts
To do this project, you should understand the following terms and concepts (do background research to fill in any gaps in your knowledge):
- binary (base 2),
- decimal (base 10),
- hexadecimal (base 16),
- bit, byte, kilobyte (kB), megabyte (MB),
- radix (number base).
- Basic HTML concepts:
- start tags and end tags,
- the <HEAD> section,
- the <SCRIPT> section,
- the <BODY> section,
- the <FORM> section,
- the <INPUT> tag,
- the <TABLE> tag, for aligning your input fields neatly.
- General programming concepts:
- reserved words,
- control statements (e.g., "if...else" statements, "for" and "while" loops)
- arithmetic operators: plus, minus, times, divide, modulus (+, -, *, /, %),
- assignment operators: e.g., =,+=,-=,*=,/=,
- comparison operators: e.g., <,>,<=,>=,==,
- logical operators: AND, OR and NOT, (&&,||,!).
- Using the binary representation of integers described in the Introduction, how many unique integers can be represented with 8 bits? With 12 bits? With n bits, where n is a positive integer?
- Using the hexadecimal representation of integers described in the Introduction, how many unique integers can be represented with 2 hexadecimal digits? With 3 hexadecimal digits? With n hexadecimal digits, where n is a positive integer?
A. Bogomolny. (n.d.). Implementation of Base Conversion Algorithms from Interactive Mathematics Miscellany and Puzzles. Retrieved March 17, 2014, from http://www.cut-the-knot.org/recurrence/conversion.shtml
- Introduction to Programming by Matt Gemmell describes what programming actually is:
Gemmell, M. (2007). Introduction to Programming. Dean's Director Tutorials & Resources. Retrieved March 14, 2014, from http://www.deansdirectortutorials.com/Lingo/IntroductionToProgramming.pdf
- HTML Forms reference:
W3C. (1999). Forms in HTML Documents, HTML 4.01 Specification. World Wide Web Consortium: Massachusetts Institute of Technology, Institut National de Recherche en Informatique et en Automatique, Keio University. Retrieved June 6, 2006, from http://www.w3.org/TR/REC-html40/interact/forms.html
- HTML Tables reference:
W3C. (1999). Tables in HTML Documents. World Wide Web Consortium: Massachusetts Institute of Technology, Institut National de Recherche en Informatique et en Automatique, Keio University. Retrieved June 6, 2006, from http://www.w3.org/TR/REC-html40/struct/tables.html
- If you get interested and start doing a lot of web development, you may want to use a text editor that is a little more sophisticated than Notepad. An editor designed for web development and programming can help with formatting so that your code is more readable, but still produce plain text files. This type of editor can also fill in web-specific HTML coding details and do "syntax highlighting" (e.g., automatic color-coding of HTML), which can help you find errors. To find one, search online for "free HTML editor."
Materials and Equipment
- Computer with web browser
- Text-editing program like Notepad, or a more advanced editor of your choice
- The program should have separate input fields for each type of number. Remember that binary representations generally use more digits than decimal or hexadecimal representations, so adjust the sizes of your input fields accordingly.
- Use the input field's onChange event so that when the user enters a number in one of the fields, the other two are updated with the new number, converted to the appropriate base.
- There are several sub-tasks here. Take the sub-tasks one at a time, and gradually build up the capabilities of your program.
- Verify that the code for each sub-task is working properly before moving on to the next sub-task.
- Test your program by verifying conversions from each type of input.
- Plan your work.
- Methodically think through all of the steps to solve your programming problem.
- Try to parcel the tasks out into short, manageable functions.
- Think about the interface for each function: what arguments need to be passed to the function so that it can do its job?
- Use careful naming, good formatting, and descriptive comments to make your code easier to read and understand.
- Give your functions and variables names that reflect their purpose in the program. A good choice of names makes the code more readable.
- Indent the statements in the body of a function so it is clear where the function code begins and ends.
- Indent the statements following an "if", "else", "for", or "while" control statement. That way you can easily see what statements are executed for a given branch or loop in your code.
- Descriptive comments are like notes to yourself. Oftentimes in programming, you'll run into a problem similar to one you've solved before. If your code is well-commented, it will make it easier to go back and re-use pieces of it later. Your comments will help you remember how you solved the previous problem.
- Work incrementally.
- When you are creating a program, it is almost inevitable that along the way you will also create bugs. Bugs are mistakes in your code that either cause your program to behave in ways that you did not intend, or cause it to stop working altogether. The more lines of code, the more chances for bugs. So, especially when you are first starting, it is important to work incrementally. Make just one change at a time, make sure that it works as expected and then move on.
- Test to make sure that your code works as expected. If your code has branch points that depend on user input, make sure that you test each of the possible branch points to make sure that there are no surprises.
- Also, it's a good idea to back up your file once in a while with a different name. That way, if something goes really wrong and you can't figure it out, you don't need to start over from scratch. Instead you can go back to an earlier version that worked, and start over from there.
- As you gain more experience with a particular programming environment, you'll be able to write larger chunks of code at one time. Even then, it is important to remember to test each new section of code to make sure that it works as expected before moving on to the next piece. By getting in the habit of working incrementally, you'll reduce the amount of time you spend identifying and fixing bugs.
- When debugging, work methodically to isolate the problem.
- We told you above that bugs are inevitable, so how do you troubleshoot them? Well, the first step is to isolate the problem: which line caused the program to stop working as expected? If you are following the previous tip and working incrementally, you can be pretty sure that the problem is with the line you just wrote.
- Avoid using reserved words as variable or function names.
- Test your program thoroughly.
- You should test the program incrementally as you write it, but you should also test the completed program to make sure that it behaves as expected.
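One low-effort way to do that final check, assuming you have written conversion functions of your own (the names myDecToBin and myBinToDec below are placeholders for whatever you called yours), is a round-trip loop in JavaScript:

var failures = 0;
for (var i = 0; i <= 255; i++) {
    if (myBinToDec(myDecToBin(i)) !== i) {  // converting to binary and back should return the original number
        failures = failures + 1;
    }
}
alert(failures + " of 256 round-trip conversions failed");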
If you like this project, you might enjoy exploring these related careers:
Computer ProgrammerComputers are essential tools in the modern world, handling everything from traffic control, car welding, movie animation, shipping, aircraft design, and social networking to book publishing, business management, music mixing, health care, agriculture, and online shopping. Computer programmers are the people who write the instructions that tell computers what to do. Read more
Computer Software EngineerAre you interested in developing cool video game software for computers? Would you like to learn how to make software run faster and more reliably on different kinds of computers and operating systems? Do you like to apply your computer science skills to solve problems? If so, then you might be interested in the career of a computer software engineer. Read more
Software Quality Assurance Engineer & TesterSoftware quality assurance engineers and testers oversee the quality of a piece of software's development over its entire life cycle. Their goal is to see to it that the final product meets the customer's requirements and expectations in both performance and value. During the software life cycle, they verify (officially state) that it is possible for the software to accomplish certain tasks. They detect problems that exist in the process of developing the software, or in the product itself. They try and make things not work (try to "break" the software) by creating errors or combinations of errors that a user might make. For example, if a user enters a period or a pound sign for a password, will that break the software? They seek to anticipate potential issues with the software before they become visible. At the end of the life cycle, they reflect upon how problems or bugs arose, and figure out ways to make the software development process better in the future. Read more
- Generalize your converter to handle any base from 2 to 36 (at base 36, the alphabet runs out of letters for additional digits).
- The converter you wrote is designed for unsigned integers (the positive integers plus zero). What about negative numbers? For a more advanced project, do research to learn how negative numbers are represented in binary notation and extend your converter to handle negative numbers.
- What about rational numbers? For a much more advanced project, do research to learn how "floating point" numbers are represented in binary notation and extend your converter to handle these numbers.
|
Mathematics is taught for 50 minutes to an hour daily in each class, following the Lancashire and White Rose suggested schemes of work.
In maths, we aim to provide children with the basic skills needed for adult life and for the world of work. They are encouraged to think logically and clearly, to estimate sensibly, to solve problems and to use their mental arithmetic skills, and learning in mathematics is set in real-life contexts to inspire and enthuse the children.
We also aim to encourage an enjoyment of the subject and an awareness of the excitement that can be found as they investigate mathematics practically.
The framework for teaching Mathematics is based on the new National Curriculum (2014) and covers Number - Number and place value, Number - Addition and Subtraction, Number - Multiplication and Division, Number - Fractions and Decimals, Geometry - Shape, Geometry - Position and Direction, Measurement, Ratio and Proportion, Statistics and Algebra.
These key learning grids can be used to provide:
The underlined statements on the grids have been identified as Key Learning Indicators of Performance (KLIPs) as these have the greatest impact on the further development of skills and subsequent learning. Consequently, the Key Learning Indicators of Performance (KLIPs) play a particularly significant role in the assessment process.
Click the link below to view the assessment tool used for making judgements on your children's attainment.
Click below to view progression in calculation documents
|
Lattice Multiplication Worksheets 2 By 2. Our printable multiplication tests consist of lattice grids. Lattice multiplication is a method of multiplying numbers using a grid.
The worksheets include long multiplication using the lattice method, 2 digit by 2 digit lattice multiplication, 3 digit by 2 digit lattice multiplication, and grade 3 practice sheets. Learning different methods of multiplying double-digit numbers can help.
Lattice Multiplication Templates Help Teachers, Homeschooling Moms And Tutors To Create Their Own Worksheets.
The lattice method of multiplication with decimal numbers is also covered. Each worksheet gives an example and explains the steps for solving lattice multiplication problems, such as working through 49 × 19 = 931.
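For readers new to the method, here is a short worked illustration (reconstructed from the 49 × 19 example above; the layout described is the standard one, and a particular worksheet's grid may be drawn slightly differently):
- Draw a 2 × 2 grid with 4 and 9 across the top (for 49) and 1 and 9 down the side (for 19), splitting each cell with a diagonal.
- Fill each cell with the product of its column and row digits, tens above the diagonal and ones below: 4 × 1 = 04, 9 × 1 = 09, 4 × 9 = 36, 9 × 9 = 81.
- Starting from the bottom-right corner, add along each diagonal, carrying into the next: 1; then 8 + 6 + 9 = 23 (write 3, carry 2); then 2 + 3 + 0 + 4 = 9; then 0.
- Reading the diagonal totals in order gives 0931, so 49 × 19 = 931.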
Lattice method: work out the answers to these multiplication questions using the lattice method, for example 29 × 18 = 522, 17 × 16, 19 × 14 and 15 × 14. The download button initiates a download of the PDF math worksheet. A 2 digit by 2 digit lattice multiplication pack (10 pages) is also available.
Lattice Multiplication Is A Method Of Multiplying Numbers Using A Grid.
Lattice 2 by 2 worksheets and Everyday Math style lattice multiplication worksheets are also included.
This Method Breaks The Multiplication Process Into Smaller Steps, Which Some Students May Find Easier.
The lattice grids are available in different sizes, and each worksheet comes with a PDF answer key.
|
In astronomy, stellar classification is the classification of stars based on their spectral characteristics. Electromagnetic radiation from the star is analyzed by splitting it with a prism or diffraction grating into a spectrum exhibiting the rainbow of colors interspersed with spectral lines. Each line indicates a particular chemical element or molecule, with the line strength indicating the abundance of that element. The strengths of the different spectral lines vary mainly due to the temperature of the photosphere, although in some cases there are true abundance differences. The spectral class of a star is a short code primarily summarizing the ionization state, giving an objective measure of the photosphere's temperature.
Most stars are currently classified under the Morgan-Keenan (MK) system using the letters O, B, A, F, G, K, and M, a sequence from the hottest (O type) to the coolest (M type). Each letter class is then subdivided using a numeric digit with 0 being hottest and 9 being coolest (e.g. A8, A9, F0, and F1 form a sequence from hotter to cooler). The sequence has been expanded with classes for other stars and star-like objects that do not fit in the classical system, such as class D for white dwarfs and classes S and C for carbon stars.
In the MK system, a luminosity class is added to the spectral class using Roman numerals. This is based on the width of certain absorption lines in the star's spectrum, which vary with the density of the atmosphere and so distinguish giant stars from dwarfs. Luminosity class 0 or Ia+ is used for hypergiants, class I for supergiants, class II for bright giants, class III for regular giants, class IV for sub-giants, class V for main-sequence stars, class sd (or VI) for sub-dwarfs, and class D (or VII) for white dwarfs. The full spectral class for the Sun is then G2V, indicating a main-sequence star with a temperature around 5,800 K.
Conventional color description
The conventional color description takes into account only the peak of the stellar spectrum. In actuality, however, stars radiate in all parts of the spectrum. Because all spectral colors combined appear white, the actual apparent colors the human eye would observe are far lighter than the conventional color descriptions would suggest. This characteristic of 'lightness' indicates that the simplified assignment of colors within the spectrum can be misleading. Excluding color-contrast illusions in dim light, there are no green, indigo, or violet stars. Red dwarfs are a deep shade of orange, and brown dwarfs do not literally appear brown, but hypothetically would appear dim grey to a nearby observer.
The modern classification system is known as the Morgan–Keenan (MK) classification. Each star is assigned a spectral class from the older Harvard spectral classification and a luminosity class using Roman numerals as explained below, forming the star's spectral type.
Other modern stellar classification systems, such as the UBV system, are based on color indexes—the measured differences in three or more color magnitudes. Those numbers are given labels such as "U-V" or "B-V", which represent the colors passed by two standard filters (e.g. Ultraviolet, Blue and Visual).
Harvard spectral classification
The Harvard system is a one-dimensional classification scheme by astronomer Annie Jump Cannon, who re-ordered and simplified a prior alphabetical system. Stars are grouped according to their spectral characteristics by single letters of the alphabet, optionally with numeric subdivisions. Main-sequence stars vary in surface temperature from approximately 2,000 to 50,000 K, whereas more-evolved stars can have temperatures above 100,000 K. Physically, the classes indicate the temperature of the star's atmosphere and are normally listed from hottest to coldest.
| Class | Effective temperature | Vega-relative chromaticity[nb 1] | Chromaticity (D65)[nb 2] | Main-sequence mass | Main-sequence radius | Main-sequence luminosity | Hydrogen lines | Fraction of all main-sequence stars |
|---|---|---|---|---|---|---|---|---|
| O | ≥ 30,000 K | blue | blue | ≥ 16 M☉ | ≥ 6.6 R☉ | ≥ 30,000 L☉ | Weak | ~0.00003% |
| B | 10,000–30,000 K | blue white | deep blue white | 2.1–16 M☉ | 1.8–6.6 R☉ | 25–30,000 L☉ | Medium | 0.13% |
| A | 7,500–10,000 K | white | blue white | 1.4–2.1 M☉ | 1.4–1.8 R☉ | 5–25 L☉ | Strong | 0.6% |
| F | 6,000–7,500 K | yellow white | white | 1.04–1.4 M☉ | 1.15–1.4 R☉ | 1.5–5 L☉ | Medium | 3% |
| G | 5,200–6,000 K | yellow | yellowish white | 0.8–1.04 M☉ | 0.96–1.15 R☉ | 0.6–1.5 L☉ | Weak | 7.6% |
| K | 3,700–5,200 K | light orange | pale yellow orange | 0.45–0.8 M☉ | 0.7–0.96 R☉ | 0.08–0.6 L☉ | Very weak | 12.1% |
| M | 2,400–3,700 K | orange red | light orange red | 0.08–0.45 M☉ | ≤ 0.7 R☉ | ≤ 0.08 L☉ | Very weak | 76.45% |
The spectral classes O through M, as well as other more specialized classes discussed later, are subdivided by Arabic numerals (0–9), where 0 denotes the hottest stars of a given class. For example, A0 denotes the hottest stars in class A and A9 denotes the coolest ones. Fractional numbers are allowed; for example, the star Mu Normae is classified as O9.7. The Sun is classified as G2.
Conventional color descriptions are traditional in astronomy, and represent colors relative to the mean color of an A class star, which is considered to be white. The apparent color descriptions are what the observer would see if trying to describe the stars under a dark sky without aid to the eye, or with binoculars. However, most stars in the sky, except the brightest ones, appear white or bluish white to the unaided eye because they are too dim for color vision to work. Red supergiants are cooler and redder than dwarfs of the same spectral type, and stars with particular spectral features such as carbon stars may be far redder than any black body.
The fact that the Harvard classification of a star indicated its surface or photospheric temperature (or more precisely, its effective temperature) was not fully understood until after its development, though by the time the first Hertzsprung–Russell diagram was formulated (by 1914), this was generally suspected to be true. In the 1920s, the Indian physicist Meghnad Saha derived a theory of ionization by extending well-known ideas in physical chemistry pertaining to the dissociation of molecules to the ionization of atoms. First he applied it to the solar chromosphere, then to stellar spectra.
Harvard astronomer Cecilia Payne then demonstrated that the O-B-A-F-G-K-M spectral sequence is actually a sequence in temperature. Because the classification sequence predates our understanding that it is a temperature sequence, the placement of a spectrum into a given subtype, such as B3 or A7, depends upon (largely subjective) estimates of the strengths of absorption features in stellar spectra. As a result, these subtypes are not evenly divided into any sort of mathematically representable intervals.
Yerkes spectral classification
The Yerkes spectral classification, also called the MKK system from the authors' initials, is a system of stellar spectral classification introduced in 1943 by William Wilson Morgan, Philip C. Keenan, and Edith Kellman from Yerkes Observatory. This two-dimensional (temperature and luminosity) classification scheme is based on spectral lines sensitive to stellar temperature and surface gravity, which is related to luminosity (whilst the Harvard classification is based on just surface temperature). Later, in 1953, after some revisions to the list of standard stars and classification criteria, the scheme was named the Morgan–Keenan classification, or MK, and this system remains in use.
Denser stars with higher surface gravity exhibit greater pressure broadening of spectral lines. The gravity, and hence the pressure, on the surface of a giant star is much lower than for a dwarf star because the radius of the giant is much greater than a dwarf of similar mass. Therefore, differences in the spectrum can be interpreted as luminosity effects and a luminosity class can be assigned purely from examination of the spectrum.
A number of different luminosity classes are distinguished, as listed in the table below.
|0 or Ia+||hypergiants or extremely luminous supergiants||Cygnus OB2#12 – B3-4Ia+ |
|Ia||luminous supergiants||Eta Canis Majoris – B5Ia |
|Iab||intermediate-size luminous supergiants||Gamma Cygni – F8Iab |
|Ib||less luminous supergiants||Zeta Persei – B1Ib |
|II||bright giants||Beta Leporis – G0II |
|III||normal giants||Arcturus – K0III |
|IV||subgiants||Gamma Cassiopeiae – B0.5IVpe |
|V||main-sequence stars (dwarfs)||Achernar – B6Vep |
|sd (prefix) or VI||subdwarfs||HD 149382 – sdB5 or B5VI |
|D (prefix) or VII||white dwarfs [nb 3]||van Maanen 2 – DZ8 |
Marginal cases are allowed; for example, a star may be either a supergiant or a bright giant, or may be in between the subgiant and main-sequence classifications. In these cases, two special symbols are used:
- A slash (/) means that a star is either one class or the other.
- A dash (-) means that the star is in between the two classes.
For example, a star classified as A3-4III/IV would be in between spectral types A3 and A4, while being either a giant star or a subgiant.
Sub-dwarf classes have also been used: VI for sub-dwarfs (stars slightly less luminous than the main sequence).
Nominal luminosity class VII (and sometimes higher numerals) is now rarely used for white dwarf or "hot sub-dwarf" classes, since the temperature-letters of the main sequence and giant stars no longer apply to white dwarfs.
Occasionally, letters a and b are applied to luminosity classes other than supergiants; for example, a giant star slightly more luminous than typical may be given a luminosity class of IIIb.
Additional nomenclature, in the form of lower-case letters, can follow the spectral type to indicate peculiar features of the spectrum.
|Code||Spectral peculiarities for stars|
|:||uncertain spectral value|
|...||Undescribed spectral peculiarities exist|
|e||Emission lines present|
|[e]||"Forbidden" emission lines present|
|er||"Reversed" center of emission lines weaker than edges|
|eq||Emission lines with P Cygni profile|
|f||N III and He II emission|
|f*||N IV λ4058Å is stronger than the N III λ4634Å, λ4640Å, & λ4642Å lines|
|f+||Si IV λ4089Å & λ4116Å are emitted, in addition to the N III line|
|(f)||N III emission, absence or weak absorption of He II|
|((f))||Displays strong He II absorption accompanied by weak N III emissions|
|h||WR stars with hydrogen emission lines.|
|ha||WR stars with hydrogen seen in both absorption and emission.|
|He wk||Weak Helium lines|
|k||Spectra with interstellar absorption features|
|m||Enhanced metal features|
|n||Broad ("nebulous") absorption due to spinning|
|nn||Very broad absorption features|
|neb||A nebula's spectrum mixed in|
|p||Unspecified peculiarity, peculiar star.[nb 4]|
|pq||Peculiar spectrum, similar to the spectra of novae|
|q||P Cygni profiles|
|s||Narrow ("sharp") absorption lines|
|ss||Very narrow lines|
|sh||Shell star features|
|var||Variable spectral feature (sometimes abbreviated to "v")|
|wl||Weak lines (also "w" & "wk")|
|(element symbol)||Abnormally strong spectral lines of the specified element(s)|
The reason for the odd arrangement of letters in the Harvard classification is historical, having evolved from the earlier Secchi classes and been progressively modified as understanding improved.
During the 1860s and 1870s, pioneering stellar spectroscopist Angelo Secchi created the Secchi classes in order to classify observed spectra. By 1866, he had developed three classes of stellar spectra, shown in the table below.
| Class number | Secchi class description |
|---|---|
| Secchi class I | White and blue stars with broad heavy hydrogen lines, such as Vega and Altair. This includes the modern class A and early class F. |
| Secchi class I (Orion subtype) | A subtype of Secchi class I with narrow lines in place of wide bands, such as Rigel and Bellatrix. In modern terms, this corresponds to early B-type stars. |
| Secchi class II | Yellow stars – hydrogen less strong, but evident metallic lines, such as the Sun, Arcturus, and Capella. This includes the modern classes G and K as well as late class F. |
| Secchi class III | Orange to red stars with complex band spectra, such as Betelgeuse and Antares. This corresponds to the modern class M. |
| Secchi class IV | In 1868, he discovered carbon stars, which he put into a distinct group: red stars with significant carbon bands and lines, corresponding to modern classes C and S. |
| Secchi class V | In 1877, he added a fifth class: emission-line stars, such as Gamma Cassiopeiae and Sheliak, which are in modern class Be. In 1891, Edward Charles Pickering proposed that class V should correspond to the modern class O (which then included Wolf-Rayet stars) and stars within planetary nebulae. |
The Roman numerals used for Secchi classes should not be confused with the completely unrelated Roman numerals used for Yerkes luminosity classes.
| Secchi class | Draper classes | Notes |
|---|---|---|
| I | **A**, **B**, C, D | Hydrogen lines dominant. |
| II | E, **F**, **G**, H, I, **K**, L | |
| III | **M** | |
| IV | N | Did not appear in the catalogue. |
| V | **O** | Included Wolf–Rayet spectra with bright lines. |

Classes carried through into the MK system are in bold.
In the 1880s, the astronomer Edward C. Pickering began to make a survey of stellar spectra at the Harvard College Observatory, using the objective-prism method. A first result of this work was the Draper Catalogue of Stellar Spectra, published in 1890. Williamina Fleming classified most of the spectra in this catalogue.
The catalogue used a scheme in which the previously used Secchi classes (I to V) were subdivided into more specific classes, given letters from A to P. Also, the letter Q was used for stars not fitting into any other class.
In 1897, another worker at Harvard, Antonia Maury, placed the Orion subtype of Secchi class I ahead of the remainder of Secchi class I, thus placing the modern type B ahead of the modern type A. She was the first to do so, although she did not use lettered spectral types, but rather a series of twenty-two types numbered from I to XXII.
In 1901, Annie Jump Cannon returned to the lettered types, but dropped all letters except O, B, A, F, G, K, M, and N used in that order, as well as P for planetary nebulae and Q for some peculiar spectra. She also used types such as B5A for stars halfway between types B and A, F2G for stars one-fifth of the way from F to G, and so on. Finally, by 1912, Cannon had changed the types B, A, B5A, F2G, etc. to B0, A0, B5, F2, etc. This is essentially the modern form of the Harvard classification system.
Mount Wilson classes
A luminosity classification known as the Mount Wilson system was used to distinguish between stars of different luminosities. This notation system is still sometimes seen on modern spectra.
The stellar classification system is taxonomic, based on type specimens, similar to classification of species in biology: The categories are defined by one or more standard stars for each category and sub-category, with an associated description of the distinguishing features.
"Early" and "late" nomenclature
Stars are often referred to as early or late types. "Early" is a synonym for hotter, while "late" is a synonym for cooler.
Depending on the context, "early" and "late" may be absolute or relative terms. "Early" as an absolute term refers to O or B, and possibly A stars. As a relative reference it relates to stars hotter than others, such as "early K" referring perhaps to K0 through K3.
"Late" is used in the same way, with an unqualified use of the term indicating stars with spectral types such as K and M, but it can also be used for stars that are cool relative to other stars, as in using "late G" to refer to G7, G8, and G9.
In the relative sense, "early" means a lower Arabic numeral following the class letter, and "late" means a higher number.
This obscure terminology is a hold-over from an early 20th century model of stellar evolution, which supposed that stars were powered by gravitational contraction via the Kelvin–Helmholtz mechanism, which is now known to not apply to main sequence stars. If that were true, then stars would start their lives as very hot "early-type" stars and then gradually cool down into "late-type" stars. This mechanism provided ages of the Sun that were much smaller than what is observed in the geologic record, and was rendered obsolete by the discovery that stars are powered by nuclear fusion. The terms "early" and "late" were carried over, beyond the demise of the model they were based on.
O-type stars are very hot and extremely luminous, with most of their radiated output in the ultraviolet range. These are the rarest of all main-sequence stars. About 1 in 3,000,000 (0.00003%) of the main-sequence stars in the solar neighborhood are O-type stars.[nb 5] Some of the most massive stars lie within this spectral class. O-type stars frequently have complicated surroundings that make measurement of their spectra difficult.
O-type spectra were formerly defined by the ratio of the strength of the He II λ4541 line to that of the He I λ4471 line, where λ is the wavelength, measured in ångströms. Spectral type O7 was defined as the point at which the two intensities are equal, with the He I line weakening towards earlier types. Type O3 was, by definition, the point at which said line disappears altogether, although it can be seen very faintly with modern technology. Because of this, the modern definition uses the ratio of the nitrogen lines N IV λ4058 and N III λλ4634-40-42.
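As a toy illustration of how such a ratio-based criterion works, the sketch below applies the classical He II λ4541 / He I λ4471 rule described above. The equality tolerance is an arbitrary choice for the example; real classification uses calibrated standard stars, not a single threshold:

```python
def o_subtype_hint(he2_4541: float, he1_4471: float) -> str:
    """Rough indicator based on the classical criterion: equal line
    strengths define O7, and He I weakens toward earlier (hotter)
    subtypes. Inputs are line strengths (e.g. equivalent widths) in
    the same units.
    """
    if he1_4471 == 0:
        return "very early O (He I λ4471 absent)"
    ratio = he2_4541 / he1_4471
    if abs(ratio - 1.0) < 0.05:    # tolerance is an arbitrary choice
        return "about O7 (the two lines are of equal strength)"
    return "earlier than O7" if ratio > 1.0 else "later than O7"

print(o_subtype_hint(1.0, 1.0))   # about O7
print(o_subtype_hint(2.0, 0.5))   # earlier than O7
```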
O-type stars have dominant He II lines, in absorption and sometimes in emission; prominent lines of ionized elements (Si IV, O III, N III, and C III) and of neutral helium, strengthening from O5 to O9; and prominent hydrogen Balmer lines, although not as strong as in later types. Because they are so massive, O-type stars have very hot cores and burn through their hydrogen fuel very quickly, so they are the first stars to leave the main sequence.
When the MKK classification scheme was first described in 1943, the only subtypes of class O used were O5 to O9.5. The MKK scheme was extended to O9.7 in 1971 and O4 in 1978, and new classification schemes that add types O2, O3 and O3.5 have subsequently been introduced.
B-type stars are very luminous and blue. Their spectra have neutral helium lines, which are most prominent at the B2 subclass, and moderate hydrogen lines. As O- and B-type stars are so energetic, they only live for a relatively short time. Thus, due to the low probability of kinematic interaction during their lifetime, they are unable to stray far from the area in which they formed, apart from runaway stars.
The transition from class O to class B was originally defined to be the point at which the He II λ4541 disappears. However, with modern equipment, the line is still apparent in the early B-type stars. Today for main-sequence stars, the B-class is instead defined by the intensity of the He I violet spectrum, with the maximum intensity corresponding to class B2. For supergiants, lines of silicon are used instead; the Si IV λ4089 and Si III λ4552 lines are indicative of early B. At mid B, the intensity of the latter relative to that of Si II λλ4128-30 is the defining characteristic, while for late B, it is the intensity of Mg II λ4481 relative to that of He I λ4471.
These stars tend to be found in their originating OB associations, which are associated with giant molecular clouds. The Orion OB1 association occupies a large portion of a spiral arm of the Milky Way and contains many of the brighter stars of the constellation Orion. About 1 in 800 (0.125%) of the main-sequence stars in the solar neighborhood are B-type main-sequence stars.[nb 5]
Be stars are massive but non-supergiant, generally main-sequence stars that have, or had at some time, one or more Balmer lines in emission, with the hydrogen emission from the stars being of particular interest. Be stars are generally thought to feature unusually strong stellar winds, high surface temperatures, and significant loss of stellar mass as the objects rotate at a curiously rapid rate. Objects known as "B(e)" or "B[e]" stars possess distinctive neutral or low-ionisation "forbidden" emission lines, that is, lines from transitions that violate the usual quantum-mechanical selection rules and can only occur in very low-density gas.
- B0V – Upsilon Orionis
- B0Ia – Alnilam
- B2Ia – Chi2 Orionis
- B2Ib – 9 Cephei
- B3V – Eta Ursae Majoris
- B3V – Eta Aurigae
- B3Ia – Omicron2 Canis Majoris
- B5Ia – Eta Canis Majoris
- B8Ia – Rigel
A-type stars are among the more common naked-eye stars, and are white or bluish-white. They have strong hydrogen lines, at a maximum by A0, and also lines of ionized metals (Fe II, Mg II, Si II), at a maximum at A5. Ca II lines strengthen notably by this point. About 1 in 160 (0.625%) of the main-sequence stars in the solar neighborhood are A-type stars.[nb 5]
- A0Van – Gamma Ursae Majoris
- A0Va – Vega
- A0Ib – Eta Leonis
- A0Ia – HD 21389
- A1V – Sirius A
- A2Ia – Deneb
- A3Va – Fomalhaut
F-type stars have strengthening H and K spectral lines of Ca II. Neutral metals (Fe I, Cr I) begin to gain on ionized metal lines by late F. Their spectra are characterized by weaker hydrogen lines and by ionized metals. Their color is white. About 1 in 33 (3.03%) of the main-sequence stars in the solar neighborhood are F-type stars.[nb 5]
G-type stars, including the Sun, have prominent H and K spectral lines of Ca II, which are most pronounced at G2. They have even weaker hydrogen lines than F-type stars, but along with the ionized metals, they have neutral metals. There is a prominent spike in the G band of CH molecules. Class G main-sequence stars make up about 7.5%, nearly one in thirteen, of the main-sequence stars in the solar neighborhood.[nb 5]
G is host to the "Yellow Evolutionary Void". Supergiant stars often swing between O or B (blue) and K or M (red). While they do this, they do not stay for long in the yellow supergiant G class, as this is an extremely unstable place for a supergiant to be.
- G0V – Beta Canum Venaticorum
- G0IV – Eta Boötis
- G0Ib – Beta Aquarii
- G2V – Sun
- G5V – Kappa Ceti
- G5IV – Mu Herculis
- G5Ib – 9 Pegasi
- G8V – 61 Ursae Majoris
- G8IV – Beta Aquilae
- G8IIIa – Kappa Geminorum
- G8IIIab – Epsilon Virginis
- G8Ib – Epsilon Geminorum
K-type stars are orangish stars that are slightly cooler than the Sun. They make up about 12% of the main-sequence stars in the solar neighborhood.[nb 5] There are also giant K-type stars, which range from hypergiants like RW Cephei to giants and supergiants such as Arcturus, whereas orange dwarfs, like Alpha Centauri B, are main-sequence stars.
They have extremely weak hydrogen lines, if present at all, and mostly neutral metals (Mn I, Fe I, Si I). By late K, molecular bands of titanium oxide become present. It has been suggested that K-spectrum stars may increase the chances of life developing on orbiting planets that are within the habitable zone.
- K0V – Sigma Draconis
- K0III – Pollux
- K0III – Epsilon Cygni
- K2V – Epsilon Eridani
- K2III – Kappa Ophiuchi
- K3III – Rho Boötis
- K5V – 61 Cygni A
- K5III – Gamma Draconis
Class M stars are by far the most common. About 76% of the main-sequence stars in the solar neighborhood are class M stars.[nb 5][nb 6] However, class M main-sequence stars (red dwarfs) have such low luminosities that none are bright enough to be seen with the unaided eye, except under exceptional conditions. The brightest known M-class main-sequence star is M0V Lacaille 8760, with magnitude 6.6 (the limiting magnitude for naked-eye visibility under good conditions is typically quoted as 6.5), and it is extremely unlikely that any brighter examples will be found.
Although most class M stars are red dwarfs, most of the largest supergiant stars in the Milky Way, such as VV Cephei, Antares, and Betelgeuse, are also class M. Furthermore, the larger, hotter brown dwarfs are late class M, usually in the range of M6.5 to M9.5.
The spectrum of a class M star contains lines from oxide molecules (in the visible spectrum, especially TiO) and all neutral metals, but absorption lines of hydrogen are usually absent. TiO bands can be strong in class M stars, usually dominating their visible spectrum by about M5. Vanadium(II) oxide bands become present by late M.
Extended spectral types
A number of new spectral types have been introduced for newly discovered types of stars.
Hot blue emission star classes
Spectra of some very hot and bluish stars exhibit marked emission lines from carbon or nitrogen, or sometimes oxygen.
Class W: Wolf–Rayet
Once included as type O stars, the Wolf-Rayet stars of class W or WR are notable for spectra lacking hydrogen lines. Instead their spectra are dominated by broad emission lines of highly ionized helium, nitrogen, carbon and sometimes oxygen. They are thought to mostly be dying supergiants with their hydrogen layers blown away by stellar winds, thereby directly exposing their hot helium shells. Class W is further divided into subclasses according to the relative strength of nitrogen and carbon emission lines in their spectra (and outer layers).
- WN – spectrum dominated by N III-V and He I-II lines
- WNE (WN2 to WN5 with some WN6) – hotter or "early"
- WNL (WN7 to WN9 with some WN6) – cooler or "late"
- Extended WN classes WN10 and WN11 sometimes used for the Ofpe/WN9 stars
- h tag used (e.g. WN9h) for WR with hydrogen emission and ha (e.g. WN6ha) for both hydrogen emission and absorption
- WN/C – WN stars plus strong C IV lines, intermediate between WN and WC stars
- WC – spectrum with strong C II-IV lines
- WCE (WC4 to WC6) – hotter or "early"
- WCL (WC7 to WC9) – cooler or "late"
- WO (WO1 to WO4) – strong O VI lines, extremely rare
Although the central stars of most planetary nebulae (CSPNe) show O type spectra, around 10% are hydrogen-deficient and show WR spectra. These are low-mass stars and to distinguish them from the massive Wolf-Rayet stars, their spectra are enclosed in square brackets: e.g. [WC]. Most of these show [WC] spectra, some [WO], and very rarely [WN].
The "Slash" stars
The slash stars are O-type stars with WN-like lines in their spectra. The name "slash" comes from their printed spectral type having a slash in it (e.g. "Of/WNL").
There is a secondary group found with these spectra: a cooler, "intermediate" group designated "Ofpe/WN9". These stars have also been referred to as WN10 or WN11, but that has become less popular with the realisation that they are evolutionarily distinct from other Wolf–Rayet stars. Recent discoveries of even rarer stars have extended the range of slash stars as far as O2-3.5If*/WN5-7, which are even hotter than the original "slash" stars.
The magnetic O stars
These are O stars with strong magnetic fields; their designation is Of?p.
Cool red and brown dwarf classes
Brown dwarfs, whose energy comes from gravitational contraction alone, cool as they age and so progress to later spectral types. Brown dwarfs start their lives with M-type spectra and cool through the L, T, and Y spectral classes, the faster the less massive they are; the highest-mass brown dwarfs cannot have cooled to Y or even T dwarfs within the age of the universe. Because this leads to an unresolvable overlap between the effective temperatures and luminosities of different masses and ages across the L-T-Y types, no distinct temperature or luminosity values can be given for these classes.
Class L dwarfs get their designation because they are cooler than M stars and L is the remaining letter alphabetically closest to M. Some of these objects have masses large enough to support hydrogen fusion and are therefore stars, but most are of substellar mass and are therefore brown dwarfs. They are a very dark red in color and brightest in infrared. Their atmosphere is cool enough to allow metal hydrides and alkali metals to be prominent in their spectra.
Due to low surface gravity in giant stars, TiO- and VO-bearing condensates never form. Thus, L-type stars larger than dwarfs can never form in an isolated environment. However, it may be possible for such L-type supergiants to form through stellar collisions, an example of which is V838 Monocerotis at the height of its luminous red nova eruption.
Class T: methane dwarfs
Class T dwarfs are cool brown dwarfs with surface temperatures between approximately 550 and 1,300 K (277 and 1,027 °C; 530 and 1,880 °F). Their emission peaks in the infrared. Methane is prominent in their spectra.
Classes T and L could be more common than all the other classes combined if recent research is accurate. Because brown dwarfs persist for so long (a few times the age of the universe), in the absence of catastrophic collisions these smaller bodies can only increase in number.
Study of the number of proplyds (protoplanetary disks, clumps of gas in nebulae from which stars and planetary systems are formed) indicates that the number of stars in the galaxy should be several orders of magnitude higher than was previously conjectured. It is theorized that these proplyds are in a race with each other. The first one to form becomes a protostar, and protostars are very violent objects that disrupt other proplyds in the vicinity, stripping them of their gas. The victim proplyds will then probably go on to become main-sequence stars or brown dwarfs of the L and T classes, which are quite invisible to us.
Brown dwarfs of spectral class Y are cooler than those of spectral class T and have qualitatively different spectra from them. A total of 17 objects have been placed in class Y as of August 2013. Although such dwarfs have been modelled and detected within forty light-years by the Wide-field Infrared Survey Explorer (WISE) there is no well-defined spectral sequence yet and no prototypes. Nevertheless, several objects have been proposed as spectral classes Y0, Y1, and Y2.
The spectra of these prospective Y objects display absorption around 1.55 micrometers. Delorme et al. have suggested that this feature is due to absorption from ammonia, and that this should be taken as the indicative feature for the T-Y transition. In fact, this ammonia-absorption feature is the main criterion that has been adopted to define this class. However, this feature is difficult to distinguish from absorption by water and methane, and other authors have stated that the assignment of class Y0 is premature.
The latest brown dwarf proposed for the Y spectral type, WISE 1828+2650, is a > Y2 dwarf with an effective temperature originally estimated around 300 K, roughly the temperature of the human body. Parallax measurements have, however, since shown that its luminosity is inconsistent with it being colder than ~400 K. The coolest Y dwarf currently known is WISE 0855−0714, with an approximate temperature of 250 K.
The mass range for Y dwarfs is 9–25 Jupiter masses, but young objects might reach below one Jupiter mass, which means that Y class objects straddle the 13 Jupiter mass deuterium-fusion limit that marks the current IAU division between brown dwarfs and planets.
Late giant carbon-star classes
Carbon-stars are stars whose spectra indicate production of carbon—a byproduct of triple-alpha helium fusion. With increased carbon abundance, and some parallel s-process heavy element production, the spectra of these stars become increasingly deviant from the usual late spectral classes G, K, and M. Equivalent classes for carbon-rich stars are S and C.
The giants among these stars are presumed to produce this carbon themselves, but some stars in this class are double stars whose odd atmosphere is suspected of having been transferred from a companion that is now a white dwarf, back when that companion was a carbon star.
Class C: carbon stars
Originally classified as R and N stars, these are also known as carbon stars. These are red giants, near the end of their lives, in which there is an excess of carbon in the atmosphere. The old R and N classes ran parallel to the normal classification system from roughly mid G to late M. These have more recently been remapped into a unified carbon classifier C with N0 starting at roughly C6. Another subset of cool carbon stars are the C-J type stars, which are characterized by the strong presence of molecules of 13CN in addition to those of 12CN. A few main-sequence carbon stars are known, but the overwhelming majority of known carbon stars are giants or supergiants. There are several subclasses:
- C-R – Formerly its own class (R) representing the carbon star equivalent of late G to early K-type stars.
- C-N – Formerly its own class representing the carbon star equivalent of late K to M-type stars.
- C-J – A subtype of cool C stars with a high content of 13C.
- C-H – Population II analogues of the C-R stars.
- C-Hd – Hydrogen-deficient carbon stars, similar to late G supergiants with CH and C2 bands added.
Class S stars form a continuum between class M stars and carbon stars. Those most similar to class M stars have strong ZrO absorption bands analogous to the TiO bands of class M stars, whereas those most similar to carbon stars have strong sodium D lines and weak C2 bands. Class S stars have excess amounts of zirconium and other elements produced by the s-process, and have more similar carbon and oxygen abundances than class M or carbon stars. Like carbon stars, nearly all known class S stars are asymptotic-giant-branch stars.
The spectral type is formed by the letter S and a number between zero and ten. This number corresponds to the temperature of the star and approximately follows the temperature scale used for class M giants. The most common types are S3 to S5. The non-standard designation S10 has only been used for the star Chi Cygni when at an extreme minimum.
The basic classification is usually followed by an abundance indication, following one of several schemes: S2,5; S2/5; S2 Zr4 Ti2; or S2*5. A number following a comma is a scale between 1 and 9 based on the ratio of ZrO and TiO. A number following a slash is a more recent but less common scheme designed to represent the ratio of carbon to oxygen on a scale of 1 to 10, where a 0 would be an MS star. Intensities of zirconium and titanium may be indicated explicitly. Also occasionally seen is a number following an asterisk, which represents the strength of the ZrO bands on a scale from 1 to 5.
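Since the four notations differ only in the separator after the temperature subtype, they can be told apart mechanically. A minimal sketch of such a parser (the function and its return format are illustrative assumptions, not an established tool):

```python
import re

def parse_s_type(spec: str):
    """Split an S-star designation into its temperature subtype and
    abundance indication, following the schemes described above:
    S2,5 (ZrO/TiO ratio), S2/5 (C/O ratio), S2*5 (ZrO band strength),
    and S2 Zr4 Ti2 (explicit intensities).
    """
    m = re.match(r"S(\d+(?:\.\d+)?)", spec)
    if not m:
        raise ValueError("not an S-type designation: " + spec)
    subtype, rest = m.group(1), spec[m.end():]
    if rest.startswith(","):
        return subtype, ("ZrO/TiO ratio", rest[1:])
    if rest.startswith("/"):
        return subtype, ("C/O ratio", rest[1:])
    if rest.startswith("*"):
        return subtype, ("ZrO band strength", rest[1:])
    return subtype, ("explicit intensities", rest.strip())

for s in ("S2,5", "S2/5", "S2*5", "S2 Zr4 Ti2"):
    print(s, "->", parse_s_type(s))
```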
In between the M and S classes, border cases are named MS stars. In a similar way, border cases between the S and C-N classes are named SC or CS. The sequence M → MS → S → SC → C-N is hypothesized to be a sequence of increased carbon abundance with age for carbon stars in the asymptotic giant branch.
White dwarf classifications
The class D (for Degenerate) is the modern classification used for white dwarfs – low-mass stars that are no longer undergoing nuclear fusion and have shrunk to planetary size, slowly cooling down. Class D is further divided into spectral types DA, DB, DC, DO, DQ, DX, and DZ. The letters are not related to the letters used in the classification of other stars, but instead indicate the composition of the white dwarf's visible outer layer or atmosphere.
- DA – a hydrogen-rich atmosphere or outer layer, indicated by strong Balmer hydrogen spectral lines.
- DB – a helium-rich atmosphere, indicated by neutral helium, He I, spectral lines.
- DO – a helium-rich atmosphere, indicated by ionized helium, He II, spectral lines.
- DQ – a carbon-rich atmosphere, indicated by atomic or molecular carbon lines.
- DZ – a metal-rich atmosphere, indicated by metal spectral lines (a merger of the obsolete white dwarf spectral types, DG, DK and DM).
- DC – no strong spectral lines indicating one of the above categories.
- DX – spectral lines are insufficiently clear to classify into one of the above categories.
The type is followed by a number giving the white dwarf's surface temperature. This number is a rounded form of 50400/Teff, where Teff is the effective surface temperature, measured in kelvins. Originally, this number was rounded to one of the digits 1 through 9, but more recently fractional values have started to be used, as well as values below 1 and above 9.
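Because the number is just a rounded form of 50400/Teff, it is straightforward to compute. A small sketch (the helper name and the one-decimal rounding are choices made for the example; the temperatures shown are illustrative):

```python
def wd_temperature_index(t_eff_kelvin: float) -> float:
    """White dwarf spectral-type number: a rounded form of 50400/Teff.
    Historically rounded to an integer 1 through 9; fractional values
    are now also used, so we round to one decimal here.
    """
    return round(50400.0 / t_eff_kelvin, 1)

# Teff ~ 25,200 K gives 50400/25200 = 2.0, so a hydrogen-atmosphere
# white dwarf at that temperature would be written DA2.
print(wd_temperature_index(25200))   # 2.0
print(wd_temperature_index(14000))   # 3.6 -> e.g. DA3.6
```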
Two or more of the type letters may be used to indicate a white dwarf that displays more than one of the spectral features above.
Extended white dwarf spectral types:
- DAB – a hydrogen- and helium-rich white dwarf displaying neutral helium lines.
- DAO – a hydrogen- and helium-rich white dwarf displaying ionized helium lines.
- DAZ – a hydrogen-rich metallic white dwarf.
- DBZ – a helium-rich metallic white dwarf.
A different set of spectral peculiarity symbols are used for white dwarfs than for other types of stars:
| Code | Spectral peculiarities for white dwarfs |
|------|-----------------------------------------|
| P | Magnetic white dwarf with detectable polarization |
| E | Emission lines present |
| H | Magnetic white dwarf without detectable polarization |
| PEC | Spectral peculiarities exist |
Non-stellar spectral types: Classes P and Q
Finally, the classes P and Q, left over from the Draper system by Cannon, are occasionally used for certain non-stellar objects. Type P objects are stars within planetary nebulae and type Q objects are novae.
Stellar remnants are objects associated with the death of stars. Included in the category are white dwarfs, and as can be seen from the radically different classification scheme for class D, non-stellar objects are difficult to fit into the MK system.
The Hertzsprung-Russell diagram, which the MK system is based on, is observational in nature so these remnants cannot easily be plotted on the diagram, or cannot be placed at all. Old neutron stars are relatively small and cold, and would fall on the far right side of the diagram. Planetary nebulae are dynamic and tend to quickly fade in brightness as the progenitor star transitions to the white dwarf branch. If shown, a planetary nebula would be plotted to the right of the diagram's upper right quadrant. A black hole emits no visible light of its own, and therefore would not appear on the diagram.
Replaced spectral classes
Several spectral types, all previously used for non-standard stars in the mid-20th century, have been replaced during revisions of the stellar classification system. They may still be found in old editions of star catalogs: R and N have been subsumed into the new C class as C-R and C-N.
Stellar classification, habitability, and the search for life
While humans may eventually be able to colonize any kind of stellar habitat, this section addresses the probability of life arising around other stars.
Stability, luminosity, and lifespan are all factors in stellar habitability. We only know of one star that hosts life, and that is our own: a G-class star with an abundance of heavy elements and low variability in brightness. It is also unlike many stellar systems in that it contains only one star (see Planetary habitability, under the binary systems section).
Working from these constraints and the problems of having an empirical sample set of only one, the range of stars that are predicted to be able to support life as we know it is limited by a few factors. Of the main-sequence star types, stars more massive than 1.5 times that of the Sun (spectral types O, B, and A) age too quickly for advanced life to develop (using Earth as a guideline). On the other extreme, dwarfs of less than half the mass of our Sun (spectral type M) are likely to tidally lock planets within their habitable zone, along with other problems (see Habitability of red dwarf systems). While there are many problems facing life on red dwarfs, due to their sheer numbers and longevity many astronomers continue to model these systems.
For these reasons, NASA's Kepler Mission is searching for habitable planets around nearby main-sequence stars that are less massive than spectral type A but more massive than type M, making the most probable stars to host life dwarf stars of types F, G, and K.
- This is the relative color of the star if Vega, generally considered a bluish star, is used as a standard for "white".
- Chromaticity can vary significantly within a class; for example, the Sun (a G2 star) is white, while a G9 star is yellow.
- Technically, white dwarfs are no longer “live” stars, but rather the “dead” remains of extinguished stars. Their classification uses a different set of spectral types from element-burning “live” stars.
- When used with A-type stars, this instead refers to abnormally strong metallic spectral lines.
- These proportions are fractions of stars brighter than absolute magnitude 16; lowering this limit will render earlier types even rarer, while generally adding only to the M class.
- This rises to 78.6% if we include all stars. (See the above note.)
- Habets, G. M. H. J.; Heinze, J. R. W. (November 1981). "Empirical bolometric corrections for the main-sequence". Astronomy and Astrophysics Supplement Series. 46: 193–237 (Tables VII and VIII). Bibcode:1981A&AS...46..193H. – Luminosities are derived from Mbol figures, using Mbol(☉)=4.75.
- Weidner, Carsten; Vink, Jorick S. (December 2010). "The masses, and the mass discrepancy of O-type stars". Astronomy and Astrophysics. 524. A98. arXiv:1010.2204. Bibcode:2010A&A...524A..98W. doi:10.1051/0004-6361/201014491.
- Charity, Mitchell. "What color are the stars?". Vendian.org. Retrieved 13 May 2006.
- "The Colour of Stars". Australia Telescope National Facility. 2018-10-17.
- Moore, Patrick (1992). The Guinness Book of Astronomy: Facts & Feats (4th ed.). Guinness. ISBN 978-0-85112-940-2.
- "The Colour of Stars". Australia Telescope Outreach and Education. 21 December 2004. Retrieved 26 September 2007. — Explains the reason for the difference in color perception.
- Baraffe, I.; Chabrier, G.; Barman, T. S.; Allard, F.; Hauschildt, P. H. (May 2003). "Evolutionary models for cool brown dwarfs and extrasolar giant planets. The case of HD 209458". Astronomy and Astrophysics. 402 (2): 701–712. arXiv:astro-ph/0302293. Bibcode:2003A&A...402..701B. doi:10.1051/0004-6361:20030252.
- Ledrew, Glenn (February 2001). "The Real Starry Sky". Journal of the Royal Astronomical Society of Canada. 95: 32. Bibcode:2001JRASC..95...32L.
- Sota, A.; Maíz Apellániz, J.; Morrell, N. I.; Barbá, R. H.; Walborn, N. R.; et al. (March 2014). "The Galactic O-Star Spectroscopic Survey (GOSSS). II. Bright Southern Stars". The Astrophysical Journal Supplement Series. 211 (1). 10. arXiv:1312.6222. Bibcode:2014ApJS..211...10S. doi:10.1088/0067-0049/211/1/10.
- Phillips, Kenneth J. H. (1995). Guide to the Sun. Cambridge University Press. pp. 47–53. ISBN 978-0-521-39788-9.
- Russell, Henry Norris (March 1914). "Relations Between the Spectra and Other Characteristics of the Stars". Popular Astronomy. Vol. 22. pp. 275–294. Bibcode:1914PA.....22..275R.
- Saha, M. N. (May 1921). "On a Physical Theory of Stellar Spectra". Proceedings of the Royal Society of London. Series A. 99 (697): 135–153. Bibcode:1921RSPSA..99..135S. doi:10.1098/rspa.1921.0029.
- Payne, Cecilia Helena (1925). Stellar Atmospheres; a Contribution to the Observational Study of High Temperature in the Reversing Layers of Stars (Ph.D). Radcliffe College. Bibcode:1925PhDT.........1P.
- Pickles, A. J. (July 1998). "A Stellar Spectral Flux Library: 1150-25000 Å". Publications of the Astronomical Society of the Pacific. 110 (749): 863–878. Bibcode:1998PASP..110..863P. doi:10.1086/316197.
- Morgan, William Wilson; Keenan, Philip Childs; Kellman, Edith (1943). An atlas of stellar spectra, with an outline of spectral classification. The University of Chicago Press. Bibcode:1943assw.book.....M. OCLC 1806249.
- Morgan, William Wilson; Keenan, Philip Childs (1973). "Spectral Classification". Annual Review of Astronomy and Astrophysics. 11: 29–50. Bibcode:1973ARA&A..11...29M. doi:10.1146/annurev.aa.11.090173.000333.
- "A note on the spectral atlas and spectral classification". Centre de données astronomiques de Strasbourg. Retrieved 2 January 2015.
- Caballero-Nieves, S. M.; Nelan, E. P.; Gies, D. R.; Wallace, D. J.; DeGioia-Eastwood, K.; et al. (February 2014). "A High Angular Resolution Survey of Massive Stars in Cygnus OB2: Results from the Hubble Space Telescope Fine Guidance Sensors". The Astronomical Journal. 147 (2). 40. arXiv:1311.5087. Bibcode:2014AJ....147...40C. doi:10.1088/0004-6256/147/2/40.
- Prinja, R. K.; Massa, D. L. (October 2010). "Signature of wide-spread clumping in B supergiant winds". Astronomy and Astrophysics. 521. L55. arXiv:1007.2744. Bibcode:2010A&A...521L..55P. doi:10.1051/0004-6361/201015252.
- Gray, David F. (November 2010). "Photospheric Variations of the Supergiant γ Cyg". The Astronomical Journal. 140 (5): 1329–1336. Bibcode:2010AJ....140.1329G. doi:10.1088/0004-6256/140/5/1329.
- Nazé, Y. (November 2009). "Hot stars observed by XMM-Newton. I. The catalog and the properties of OB stars". Astronomy and Astrophysics. 506 (2): 1055–1064. arXiv:0908.1461. Bibcode:2009A&A...506.1055N. doi:10.1051/0004-6361/200912659.
- Lyubimkov, Leonid S.; Lambert, David L.; Rostopchin, Sergey I.; Rachkovskaya, Tamara M.; Poklad, Dmitry B. (February 2010). "Accurate fundamental parameters for A-, F- and G-type Supergiants in the solar neighbourhood". Monthly Notices of the Royal Astronomical Society. 402 (2): 1369–1379. arXiv:0911.1335. Bibcode:2010MNRAS.402.1369L. doi:10.1111/j.1365-2966.2009.15979.x.
- Gray, R. O.; Corbally, C. J.; Garrison, R. F.; McFadden, M. T.; Robinson, P. E. (October 2003). "Contributions to the Nearby Stars (NStars) Project: Spectroscopy of Stars Earlier than M0 within 40 Parsecs: The Northern Sample. I". The Astronomical Journal. 126 (4): 2048–2059. arXiv:astro-ph/0308182. Bibcode:2003AJ....126.2048G. doi:10.1086/378365.
- Cenarro, A. J.; Peletier, R. F.; Sanchez-Blazquez, P.; Selam, S. O.; Toloba, E.; Cardiel, N.; Falcon-Barroso, J.; Gorgas, J.; Jimenez-Vicente, J.; Vazdekis, A. (January 2007). "Medium-resolution Isaac Newton Telescope library of empirical spectra - II. The stellar atmospheric parameters". Monthly Notices of the Royal Astronomical Society. 374 (2): 664–690. arXiv:astro-ph/0611618. Bibcode:2007MNRAS.374..664C. doi:10.1111/j.1365-2966.2006.11196.x.
- Sion, Edward M.; Holberg, J. B.; Oswalt, Terry D.; McCook, George P.; Wasatonic, Richard (December 2009). "The White Dwarfs Within 20 Parsecs of the Sun: Kinematics and Statistics". The Astronomical Journal. 138 (6): 1681–1689. arXiv:0910.1288. Bibcode:2009AJ....138.1681S. doi:10.1088/0004-6256/138/6/1681.
- Smith, Myron A.; et al. (2011). "An Encoding System to Represent Stellar Spectral Classes in Archival Databases and Catalogs". arXiv:1112.3617 [astro-ph.SR].
- Arias, Julia I.; et al. (August 2016). "Spectral Classification and Properties of the OVz Stars in the Galactic O Star Spectroscopic Survey (GOSSS)". The Astronomical Journal. 152 (2): 31. arXiv:1604.03842. Bibcode:2016AJ....152...31A. doi:10.3847/0004-6256/152/2/31.
- MacRobert, Alan (1 August 2006). "The Spectral Types of Stars". Sky & Telescope.
- Allen, J. S. "The Classification of Stellar Spectra". UCL Department of Physics and Astronomy: Astrophysics Group. Retrieved 1 January 2014.
- Maíz Apellániz, J.; Walborn, Nolan R.; Morrell, N. I.; Niemela, V. S.; Nelan, E. P. (2007). "Pismis 24-1: The Stellar Upper Mass Limit Preserved". The Astrophysical Journal. 660 (2): 1480–1485. arXiv:astro-ph/0612012. Bibcode:2007ApJ...660.1480M. doi:10.1086/513098.
- Fariña, Cecilia; Bosch, Guillermo L.; Morrell, Nidia I.; Barbá, Rodolfo H.; Walborn, Nolan R. (2009). "Spectroscopic Study of the N159/N160 Complex in the Large Magellanic Cloud". The Astronomical Journal. 138 (2): 510–516. arXiv:0907.1033. Bibcode:2009AJ....138..510F. doi:10.1088/0004-6256/138/2/510.
- Rauw, G.; Manfroid, J.; Gosset, E.; Nazé, Y.; Sana, H.; De Becker, M.; Foellmi, C.; Moffat, A. F. J. (2007). "Early-type stars in the core of the young open cluster Westerlund 2". Astronomy and Astrophysics. 463 (3): 981–991. arXiv:astro-ph/0612622. Bibcode:2007A&A...463..981R. doi:10.1051/0004-6361:20066495.
- Crowther, Paul A. (2007). "Physical Properties of Wolf-Rayet Stars". Annual Review of Astronomy & Astrophysics. 45 (1): 177–219. arXiv:astro-ph/0610356. Bibcode:2007ARA&A..45..177C. doi:10.1146/annurev.astro.45.051806.110615.
- Rountree Lesh, J. (1968). "The Kinematics of the Gould Belt: An Expanding Group?". The Astrophysical Journal Supplement Series. 17: 371. Bibcode:1968ApJS...17..371L. doi:10.1086/190179.
- Secchi, P. (1866). "Analyse spectrale de la lumière de quelques étoiles, et nouvelles observations sur les taches solaires" [Spectral analysis of the light of some stars, and new observations on sunspots]. Comptes Rendus des Séances de l'Académie des Sciences. 63 (July–December 1866): 364–368.
- Secchi, P. (1866). "Nouvelles recherches sur l'analyse spectrale de la lumière des étoiles" [New research on the spectral analysis of starlight]. Comptes Rendus des Séances de l'Académie des Sciences. 63 (July–December 1866): 621–628.
- Hearnshaw, J. B. (1986). The Analysis of Starlight: One Hundred and Fifty Years of Astronomical Spectroscopy. Cambridge, UK: Cambridge University Press. pp. 60, 134. ISBN 978-0-521-25548-6.
- Classification of Stellar Spectra: Some History
- Kaler, James B. (1997). Stars and Their Spectra: An Introduction to the Spectral Sequence. Cambridge: Cambridge University Press. pp. 62–63. ISBN 978-0-521-58570-5.
- p. 60–63, Hearnshaw 1986; pp. 623–625, Secchi 1866.
- pp. 62–63, Hearnshaw 1986.
- p. 60, Hearnshaw 1986.
- Catchers of the Light: The Forgotten Lives of the Men and Women Who First Photographed the Heavens by Stefan Hughes.
- Pickering, Edward C. (1890). "The Draper Catalogue of stellar spectra photographed with the 8-inch Bache telescope as a part of the Henry Draper memorial". Annals of Harvard College Observatory. 27: 1. Bibcode:1890AnHar..27....1P.
- pp. 106–108, Hearnshaw 1986.
- pp. 111–112, Hearnshaw 1986.
- Maury, Antonia C.; Pickering, Edward C. (1897). "Spectra of bright stars photographed with the 11 inch Draper Telescope as part of the Henry Draper Memorial". Annals of Harvard College Observatory. 28: 1. Bibcode:1897AnHar..28....1M.
- Cannon, Annie J.; Pickering, Edward C. (1901). "Spectra of bright southern stars photographed with the 13 inch Boyden telescope as part of the Henry Draper Memorial". Annals of Harvard College Observatory. 28: 129. Bibcode:1901AnHar..28..129C.
- pp. 117–119, Hearnshaw 1986.
- Cannon, Annie Jump; Pickering, Edward Charles (1912). "Classification of 1,688 southern stars by means of their spectra". Annals of the Astronomical Observatory of Harvard College. 56 (5): 115. Bibcode:1912AnHar..56..115C.
- pp. 121–122, Hearnshaw 1986.
- "SPECTRAL CLASSIFICATION OF STARS". www.eudesign.com. Retrieved 2019-04-06.
- Nassau, J. J.; Seyfert, Carl K. (March 1946). "Spectra of BD Stars Within Five Degrees of the North Pole". Astrophysical Journal. 103: 117. Bibcode:1946ApJ...103..117N. doi:10.1086/144796.
- FitzGerald, M. Pim (October 1969). "Comparison Between Spectral-Luminosity Classes on the Mount Wilson and Morgan-Keenan Systems of Classification". Journal of the Royal Astronomical Society of Canada. 63: 251. Bibcode:1969JRASC..63..251P.
- Sandage, A. (December 1969). "New subdwarfs. II. Radial velocities, photometry, and preliminary space motions for 112 stars with large proper motion". Astrophysical Journal. 158: 1115. Bibcode:1969ApJ...158.1115S. doi:10.1086/150271.
- Norris, Jackson M.; Wright, Jason T.; Wade, Richard A.; Mahadevan, Suvrath; Gettel, Sara (December 2011). "Non-detection of the Putative Substellar Companion to HD 149382". The Astrophysical Journal. 743 (1). 88. arXiv:1110.1384. Bibcode:2011ApJ...743...88N. doi:10.1088/0004-637X/743/1/88.
- Garrison, R. F. (1994). "A Hierarchy of Standards for the MK Process". Astronomical Society of the Pacific. 60: 3. Bibcode:1994ASPC...60....3G.
- Darling, David. "late-type star". The Internet Encyclopedia of Science. Retrieved 14 October 2007.
- Walborn, N. R. (2008). "Multiwavelength Systematics of OB Spectra". Massive Stars: Fundamental Parameters and Circumstellar Interactions (eds. P. Benaglia et al.). 33: 5. Bibcode:2008RMxAC..33....5W.
- An atlas of stellar spectra, with an outline of spectral classification, W. W. Morgan, P. C. Keenan and E. Kellman, Chicago: The University of Chicago Press, 1943.
- Walborn, N. R. (1971). "Some Spectroscopic Characteristics of the OB Stars: An Investigation of the Space Distribution of Certain OB Stars and the Reference Frame of the Classification". The Astrophysical Journal Supplement Series. 23: 257. Bibcode:1971ApJS...23..257W. doi:10.1086/190239.
- Morgan, W. W.; Abt, Helmut A.; Tapscott, J. W. (1978). "Revised MK Spectral Atlas for stars earlier than the sun". Williams Bay: Yerkes Observatory. Bibcode:1978rmsa.book.....M.
- Walborn, Nolan R.; Howarth, Ian D.; Lennon, Daniel J.; Massey, Philip; Oey, M. S.; Moffat, Anthony F. J.; Skalkowski, Gwen; Morrell, Nidia I.; Drissen, Laurent; Parker, Joel Wm. (2002). "A New Spectral Classification System for the Earliest O Stars: Definition of Type O2". The Astronomical Journal. 123 (5): 2754–2771. Bibcode:2002AJ....123.2754W. doi:10.1086/339831.
- Slettebak, Arne (July 1988). "The Be Stars". Publications of the Astronomical Society of the Pacific. 100: 770–784. Bibcode:1988PASP..100..770S. doi:10.1086/132234.
- "SIMBAD Object query : CCDM J02319+8915". SIMBAD. Centre de Données astronomiques de Strasbourg. Retrieved 10 June 2010.
- Nieuwenhuijzen, H.; De Jager, C. (2000). "Checking the yellow evolutionary void. Three evolutionary critical Hypergiants: HD 33579, HR 8752 & IRC +10420". Astronomy and Astrophysics. 353: 163. Bibcode:2000A&A...353..163N.
- "On a cosmological timescale, The Earth's period of habitability is nearly over | International Space Fellowship". Spacefellowship.com. Retrieved 22 May 2012.
- Stars as Cool as the Human Body
- "Galactic refurbishment". www.spacetelescope.org. ESA/Hubble. Retrieved 29 April 2015.
- Figer, Donald F.; McLean, Ian S.; Najarro, Francisco (1997). "AK‐Band Spectral Atlas of Wolf‐Rayet Stars". The Astrophysical Journal. 486 (1): 420–434. Bibcode:1997ApJ...486..420F. doi:10.1086/304488.
- Kingsburgh, R. L.; Barlow, M. J.; Storey, P. J. (1995). "Properties of the WO Wolf-Rayet stars". Astronomy and Astrophysics. 295: 75. Bibcode:1995A&A...295...75K.
- Tinkler, C. M.; Lamers, H. J. G. L. M. (2002). "Mass-loss rates of H-rich central stars of planetary nebulae as distance indicators?". Astronomy and Astrophysics. 384 (3): 987–998. Bibcode:2002A&A...384..987T. doi:10.1051/0004-6361:20020061.
- Miszalski, B.; Crowther, P. A.; De Marco, O.; Köppen, J.; Moffat, A. F. J.; Acker, A.; Hillwig, T. C. (2012). "IC 4663: The first unambiguous [WN] Wolf-Rayet central star of a planetary nebula". Monthly Notices of the Royal Astronomical Society. 423 (1): 934–947. arXiv:1203.3303. Bibcode:2012MNRAS.423..934M. doi:10.1111/j.1365-2966.2012.20929.x.
- Crowther, P. A.; Walborn, N. R. (2011). "Spectral classification of O2-3.5 If*/WN5-7 stars". Monthly Notices of the Royal Astronomical Society. 416 (2): 1311–1323. arXiv:1105.4757. Bibcode:2011MNRAS.416.1311C. doi:10.1111/j.1365-2966.2011.19129.x.
- Kirkpatrick, J. D. (2008). "Outstanding Issues in Our Understanding of L, T, and Y Dwarfs". 14th Cambridge Workshop on Cool Stars. 384: 85. arXiv:0704.1522. Bibcode:2008ASPC..384...85K.
- Kirkpatrick, J. Davy; Reid, I. Neill; Liebert, James; Cutri, Roc M.; Nelson, Brant; Beichman, Charles A.; Dahn, Conard C.; Monet, David G.; Gizis, John E.; Skrutskie, Michael F. (10 July 1999). "Dwarfs Cooler than M: The Definition of Spectral Type L Using Discoveries from the 2 Micron All-Sky Survey (2MASS)". The Astrophysical Journal. 519 (2): 802–833. Bibcode:1999ApJ...519..802K. doi:10.1086/307414.
- Kirkpatrick, J. Davy (2005). "New Spectral Types L and T". Annual Review of Astronomy and Astrophysics. 43 (1): 195–246. Bibcode:2005ARA&A..43..195K. doi:10.1146/annurev.astro.42.053102.134017.
- Kirkpatrick, J. Davy; Barman, Travis S.; Burgasser, Adam J.; McGovern, Mark R.; McLean, Ian S.; Tinney, Christopher G.; Lowrance, Patrick J. (2006). "Discovery of a Very Young Field L Dwarf, 2MASS J01415823−4633574". The Astrophysical Journal. 639 (2): 1120–1128. arXiv:astro-ph/0511462. Bibcode:2006ApJ...639.1120K. doi:10.1086/499622.
- Kirkpatrick, J. Davy; Cushing, Michael C.; Gelino, Christopher R.; Beichman, Charles A.; Tinney, C. G.; Faherty, Jacqueline K.; Schneider, Adam; Mace, Gregory N. (2013). "Discovery of the Y1 Dwarf WISE J064723.23-623235.5". The Astrophysical Journal. 776 (2): 128. arXiv:1308.5372. Bibcode:2013ApJ...776..128K. doi:10.1088/0004-637X/776/2/128.
- Deacon, N. R.; Hambly, N. C. (2006). "Y-Spectral Class for Ultra-Cool Dwarfs".
- Wehner, Mike (24 August 2011). "NASA spots chilled-out stars cooler than the human body | Technology News Blog – Yahoo! News Canada". Ca.news.yahoo.com. Retrieved 22 May 2012.
- "NASA's WISE Mission Discovers Coolest Class of Stars".
- Zuckerman, B.; Song, I. (2009). "The minimum Jeans mass, brown dwarf companion IMF, and predictions for detection of Y-type dwarfs". Astronomy and Astrophysics. 493 (3): 1149–1154. arXiv:0811.0429. Bibcode:2009A&A...493.1149Z. doi:10.1051/0004-6361:200810038.
- Dupuy, T. J.; Kraus, A. L. (2013). "Distances, Luminosities, and Temperatures of the Coldest Known Substellar Objects". Science. 341 (6153): 1492–5. arXiv:1309.1422. Bibcode:2013Sci...341.1492D. doi:10.1126/science.1241917. PMID 24009359.
- Leggett, S. K.; Cushing, Michael C.; Saumon, D.; Marley, M. S.; Roellig, T. L.; Warren, S. J.; Burningham, Ben; Jones, H. R. A.; Kirkpatrick, J. D.; Lodieu, N.; Lucas, P. W.; Mainzer, A. K.; Martín, E. L.; McCaughrean, M. J.; Pinfield, D. J.; Sloan, G. C.; Smart, R. L.; Tamura, M.; Van Cleve, J. (2009). "The Physical Properties of Four ∼600 K T Dwarfs". The Astrophysical Journal. 695 (2): 1517–1526. arXiv:0901.4093. Bibcode:2009ApJ...695.1517L. doi:10.1088/0004-637X/695/2/1517.
- Delorme, P.; Delfosse, X.; Albert, L.; Artigau, E.; Forveille, T.; Reylé, C.; Allard, F.; Homeier, D.; Robin, A. C.; Willott, C. J.; Liu, M. C.; Dupuy, T. J. (2008). "CFBDS J005910.90-011401.3: Reaching the T-Y brown dwarf transition?". Astronomy and Astrophysics. 482 (3): 961–971. arXiv:0802.4387. Bibcode:2008A&A...482..961D. doi:10.1051/0004-6361:20079317.
- Burningham, Ben; Pinfield, D. J.; Leggett, S. K.; Tamura, M.; Lucas, P. W.; Homeier, D.; Day-Jones, A.; Jones, H. R. A.; Clarke, J. R. A.; Ishii, M.; Kuzuhara, M.; Lodieu, N.; Zapatero Osorio, M. R.; Venemans, B. P.; Mortlock, D. J.; Barrado y Navascués, D.; Martin, E. L.; Magazzù, A. (2008). "Exploring the substellar temperature regime down to ∼550 K". Monthly Notices of the Royal Astronomical Society. 391 (1): 320–333. arXiv:0806.0067. Bibcode:2008MNRAS.391..320B. doi:10.1111/j.1365-2966.2008.13885.x.
- European Southern Observatory. "A Very Cool Pair of Brown Dwarfs", 23 March 2011
- Luhman, Kevin L.; Esplin, Taran L. (May 2016). "The Spectral Energy Distribution of the Coldest Known Brown Dwarf". The Astronomical Journal. 152 (3): 78. arXiv:1605.06655. Bibcode:2016AJ....152...78L. doi:10.3847/0004-6256/152/3/78.
- Bouigue, R. (1954). Annales d'Astrophysique. 17: 104.
- Keenan, P. C. (1954). Astrophysical Journal. 120: 484.
- Sion, E. M.; Greenstein, J. L.; Landstreet, J. D.; Liebert, J.; Shipman, H. L.; Wegner, G. A. (1983). "A proposed new white dwarf spectral classification system". Astrophysical Journal. 269: 253. Bibcode:1983ApJ...269..253S. doi:10.1086/161036.
- Córsico, A. H.; Althaus, L. G. (2004). "The rate of period change in pulsating DB-white dwarf stars". Astronomy and Astrophysics. 428: 159–170. arXiv:astro-ph/0408237. Bibcode:2004A&A...428..159C. doi:10.1051/0004-6361:20041372.
- McCook, George P.; Sion, Edward M. (1999). "A Catalog of Spectroscopically Identified White Dwarfs". The Astrophysical Journal Supplement Series. 121 (1): 1–130. Bibcode:1999ApJS..121....1M. CiteSeerX 10.1.1.565.5507. doi:10.1086/313186.
- "Pulsating Variable Stars and the Hertzsprung-Russell (H-R) Diagram". Harvard-Smithsonian Center for Astrophysics. 9 March 2015. Retrieved 23 July 2016.
- Libraries of stellar spectra by D. Montes, UCM
- Spectral Types for Hipparcos Catalogue Entries
- Stellar Spectral Classification by Richard O. Gray and Christopher J. Corbally
- Spectral models of stars by P. Coelho
- Merrifield, Michael; Bauer, Amanda; Häußler, Boris (2010). "Star Classification". Sixty Symbols. Brady Haran for the University of Nottingham.
- Stellar classification table
If you want to know what the climate was like in the Holocene, simply take some outerwear, go out into nature, and look around. The Holocene is, namely, the present epoch, extending from the end of the Weichsel glaciation to today.
The interglacial Holocene has supported the development and growth of human civilizations; it has been the cradle of civilization, not to say its womb. It started around 11,700 years before present with a sudden warming at the end of the cold period called the Younger Dryas. In only ten years, the temperature in Greenland rose by an impressive 8 degrees, which corresponds to Northern Europe's climate being replaced by a Mediterranean climate. It is not known what caused this rapid rise in temperature.
The Cenozoic is the period of the mammals, which followed the Mesozoic, the period of the dinosaurs. The Tertiary is the part of the Cenozoic in which no humans existed, and the Quaternary is the part of the Cenozoic in which humans exist. The Quaternary is composed of the Pleistocene and the Holocene. The Pleistocene is the period that we in common language call the Ice Age. The Holocene represents the present, which basically is a Pleistocene interglacial period; it is represented by the thin red line on the far left of the figure. The climate of the Holocene is the subject of this article.
During the following one thousand years, the temperature increased until the climate became several degrees warmer than today. About 8,000 years before present, in the Hunter Stone Age, occurred the hottest period of the entire Holocene. This initiated the warm period called the Holocene Optimum, which lasted until about 4,500 years before present, whereafter the temperature continued to drop through the Bronze Age, the Iron Age, and historical time until it reached a low point in the Little Ice Age in the years 1600-1700. Within the last few hundred years, the temperature has again increased, but not to such heights as in the Hunter Stone Age.
This graph is taken from Wikipedia. It shows eight different reconstructions of Holocene temperature. The thick black line is the average of these. Time progresses from left to right.
On this graph, the Stone Age is shown as only about one degree warmer than the present day, but most sources state that the Scandinavian Stone Age was about 2-3 degrees warmer than the present; these need not be mutually exclusive statements, because the curve reconstructs the temperature of the entire Earth, and at higher latitudes the temperature variations were greater than near the equator.
Some reconstructions show a dramatic vertical increase in temperature around the year 2000, but this does not seem reasonable to the author, since graphs of this kind cannot possibly show the temperature of specific years; they must necessarily be smoothed by some kind of mathematical rolling average, perhaps with periods of a hundred years, and then a high temperature in a single year, for example 2004, will be much less visible.
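The smoothing the author has in mind is a simple moving average; a minimal sketch of the dilution effect (the window length and the toy data are arbitrary choices for illustration):

```python
def moving_average(values, window):
    """Centered moving average; the window shrinks at the series
    edges. A single extreme year is diluted across the whole window,
    which is why it barely shows in a smoothed curve."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

# One anomalously warm 'year' in otherwise flat data:
series = [0.0] * 100
series[50] = 2.0                        # a single +2 degree spike
smoothed = moving_average(series, 100)  # roughly century-long window
print(max(smoothed))                    # ~0.02-0.04: the spike nearly vanishes
```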
The trend seems to be that the Holocene's highest temperature was reached in the Hunter Stone Age about 8,000 years before present; since then the temperature has generally been falling steadily, superimposed with many cold and warm periods, including the modern warm period.
However, generally speaking, the Holocene represents an amazingly stable climate, where the cooling through the period has been limited to a few degrees.
The general decline in temperatures since 8,000 years before present was overlaid by several cold and warm periods. Thus we speak of five to seven cold periods in the Holocene, including the Little Ice Age, and several warm periods, including the Minoan, Roman, Medieval and Modern Warm Periods.
Temperature variations in the Holocene compared with the previous Weichsel ice age based on analysis of Greenland ice cores. The vertical scale on the left shows the temperature on the surface of the ice, and the horizontal scale is years before present. Time progresses from left to right. It appears that the climate of the Holocene really has been very stable and the temperature has only varied a few degrees - The most dramatic event so far has been the 8,200 cold period and the ensuing Holocene maximum in the Stone Age. By contrast, the climate in the previous ice age was not nearly as stable. The temperature often varied more than 20 degrees during a few hundred years or perhaps less.
The Holocene cold and warm periods, however, represent only small temperature changes compared to both the glacial periods and the other interglacials.
This unique climatic stability made the development of agriculture possible; it created the basis for the development of civilizations and eventually enabled the industrial revolution and consequently the modern world with its technology and myriads of people. Had we not had a window of about 10,000 years of stable climate with only small temperature variations, civilization would not have been nearly as developed, if it existed at all, and Earth's population would have been only a fraction of its current size.
Temperature variations in the Holocene compared with the preceding interglacials - The vertical scale shows the temperature on the ice surface, and the horizontal time scale is in thousands of years before present. It can be seen that the Holocene temperature graph has a different shape than those of the previous interglacial periods. The Holocene has a nearly flat top, which represents a fairly stable climate through ten thousand years, while the preceding interglacials are generally pointed, that is, the temperature rose to a maximum and then declined again, maybe after only a few hundred years. Only the Holocene could offer a stable climate for a long time, during which agriculture and civilization could develop.
Note, moreover, the characteristic shape of almost all warming periods: the heat comes suddenly, perhaps within a few decades, and then decreases slowly. This is also the case for the Holocene, except that the temperature has dropped much more slowly than in the other interglacials and warming periods.
Many believe that the declining Milankovitch insolation is the cause of the general cooling trend during the Holocene. The Milankovitch insolation is the theoretical insolation (received energy from the Sun) at 65 degrees northern latitude in June. Its variation in the Holocene is mainly due to changes in the axial tilt, such that the northern hemisphere in the beginning received a large June insolation, as it was turned more directly towards the Sun during summer, while at present the axis is more upright, and the northern hemisphere therefore receives less solar radiation in summer.
Temperature and Milankovitch insolation during Holocene.
The upper reddish graph represents the temperature in Celsius on the ice surface in Greenland. The curve falls generally through the Holocene but is overlaid by many cold and warm periods. Many researchers identify six cold spells, of which the best known are the 8,200 cold period and the Little Ice Age. The most famous warm periods are the Minoan, the Roman and the Medieval warm periods.
The Norwegian Axel Blytt and the Swede Rutger Sernander developed in the 1800s the Blytt-Sernander division of the Holocene climate, based on studies of Danish peat bogs. It includes the periods Preboreal, Boreal, Atlantic, Subboreal and Subatlantic, which are shown at the bottom of the figure. The Preboreal is also known as the Birch-Pine period. There are many different opinions about when the various Blytt-Sernander periods begin and end.
Today, some believe that this classification is outdated and prefer other divisions, which include the Holocene Climatic Optimum, Postglacial and Neoglacial, all shown at the top of this figure in colors.
The yellow-green graph shown below the temperature curve represents the theoretical Milankovitch insolation in the Holocene in watts per m2. The Milankovitch insolation is the solar radiation at 65 degrees northern latitude in the month of June. It can be seen that the insolation maximum occurred about 10,000 years before present at about 470 W/m2, and since then the insolation has declined steadily to today's low value of slightly less than 430 W/m2.
In the very early Holocene, northern Europe became vegetated by an open and light birch forest mixed with aspen, willow, mountain ash and pine. The ever milder climate caused average summer temperatures to rise to 18-20 degrees, while winter temperatures stood at just below freezing. The composition of the forest trees changed; pine pushed back birch; hazel, elm, oak, ash, alder, fir and linden immigrated.
About 8,200 years ago, there was a sharp cooling in the Northern Hemisphere. It has been attributed to an excessive supply of cold glacial meltwater from glaciers in the Hudson Bay area. Data from Disko Bay show that there was a large production of meltwater here too. Samples taken from the ocean floor at Spitsbergen indicate that there the Arctic waters had pushed further south as early as 8,800 years ago.
From HOCLAT - A web-based Holocene Climate Atlas (see link below). Reconstructed summer air temperature from pollen analysis of sediments from the bottom of a small Swedish lake between Vanern and Vattern, at 58.55 degrees northern latitude and 13.67 degrees eastern longitude.
The original data are the very thin line in the diagram at the top. They have been smoothed with a form of mathematical rolling average over 500 years; that is the blue line. In addition, the original data have also been smoothed mathematically over 3,000 years; that is the red line. The figure at the bottom shows how much the blue line deviates from the red line, which is a measure of climate change. The blue areas thus show how much the temperature of a cold period deviates from the more average temperature of its age, and the red areas show how much the temperature of a warm period exceeds the more average temperature of its age.
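The figure's construction can be reproduced as the difference between a short and a long smoothing of the same record. A self-contained sketch (the sampling interval, window lengths in samples, and random toy data are assumptions made for illustration):

```python
import random

def moving_average(values, window):
    """Centered moving average, shrinking at the series edges
    (the same smoothing idea as in the earlier sketch)."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

def climate_anomaly(series, short_window, long_window):
    """Short smoothing (the 'blue line') minus long smoothing (the
    'red line'): positive values mark warm periods, negative cold."""
    short = moving_average(series, short_window)
    long_ = moving_average(series, long_window)
    return [s - l for s, l in zip(short, long_)]

# Illustrative only: pretend one sample per decade, so windows of 50
# and 300 samples correspond to 500 and 3,000 years respectively.
random.seed(1)
record = [random.gauss(15.0, 0.5) for _ in range(1000)]
anomaly = climate_anomaly(record, 50, 300)
print(min(anomaly), max(anomaly))   # cold and warm excursions
```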
It can be seen that the 8,200 cold spell represents a very severe climate change; it must have been a sudden change for the Stone Age hunters. Moreover, it looks as if the cold periods come at regular intervals. The cold period 5,900 years before present took place at the transition from the Hunter Stone Age to the Peasant Stone Age. The cold period 3,500 years before present coincided with the beginning of the Bronze Age. The small cold period around 1,800 years before present occurred a few hundred years after the birth of Christ; perhaps it was at that time that the Goths left the island of Skanza (Scandinavia). The Little Ice Age seems to come somewhat early in this pattern.
Many explain the 8,200 cold period as a result of a large discharge of cold meltwater into the Atlantic from Lake Agassiz at the edge of the Laurentide ice sheet in North America.
It may seem paradoxical that warmer weather in the Arctic, which caused the melting of ice caps and sea ice and thus the production of cold fresh water, led to a colder climate in Northern Europe and probably also in North America. The explanation is that the large amounts of cold fresh water, which is lighter than salt water, disturbed the ocean currents, and the weakened Gulf Stream caused colder weather along the North Atlantic coasts. Many believe that such a meltwater mechanism also caused the Younger Dryas cold period.
Analysis of oxygen isotopes in stalagmites from Costa Rica shows a dry period around 8,200 years before present. From "Tropical response to the 8200 yr B.P. cold event?" by Matthew S. Lachniet and others.
Even harder to comprehend is that a team of American geologists from the University of Buffalo and other universities has found that glaciers on Baffin Island grew during the cold period. One can only conclude that if meltwater was produced at the same time, precipitation in the region must have been very large indeed.
Analyses of oxygen isotopes in stalagmites from caves in Costa Rica have shown that there was a dry period about 8,200 years before present, caused by a weaker monsoon and reduced precipitation in Central America. This calls the meltwater Gulf Stream theory into question, as the climate of Costa Rica does not depend on a warm Gulf Stream; the region belongs to the equatorial zone that supplies the heat to the Gulf Stream in the first place.
The cold period cannot, however, be detected in the Southern Hemisphere: neither in drill cores from the Antarctic ice sheet, in glaciers in Bolivia, nor in samples taken from the seabed off the mouth of the Murray River in Australia. This indicates that the cold period may have been a truly North Atlantic phenomenon, perhaps caused by variations in the sea currents.
But after a while the sea currents in the North Atlantic, if they had indeed been the cause, found their way back to their old routes, and about 8,000 years before present the warmest period of the entire Holocene began.
In sediments from the bottom of the lakes Huelmo and Mascardi, in the Andes Mountains of Chile and Argentina respectively, scientists have found evidence of a cold period in the Southern Hemisphere, lasting some 800 years, between 11,400 and 10,200 years before present.
The parasitic plant mistletoe on a willow tree - During the Holocene Optimum the parasitic mistletoe was widespread in southern Scandinavia. Today it grows further south, in southern England and in Central and Southern Europe.
The hottest time in the Holocene occurred in the Stone Age about 8,000 years before present; it is called the Holocene Maximum. This warm climate continued largely unbroken for 3,500 years, until 4,500 years before present, when the Neolithic period prevailed in Northern Europe.
It is assumed that the average temperature was 2-3 degrees higher than today. This is supported by the fact that plants such as mistletoe and the subtropical aquatic plant Trapa natans grew widely in southern Scandinavia. Linden, elm, spruce and oak were the most common trees in northern Europe's dense forests, which closed the continent's interior into one big impenetrable forest.
In Denmark, scientists have studied Stone Age settlements from the Holocene Climatic Optimum and found bones of various terrestrial and marine animals, including swordfish, sturgeon, sardine, tuna, Dalmatian pelican and pond turtle, all species that today live in warmer climes.
A 4,000- to 4,500-year-old pine stump in the Cairngorm Mountains.
In the Cairngorm Mountains in central Scotland you can find stumps of 4,000- to 4,500-year-old pine trees that grew 650 meters above sea level. This altitude is slightly above today's limit even for dwarf and stunted trees.
Another testimony of a warmer climate in the past can be found in Dartmoor in southern England, though from slightly later than the Holocene Optimum. Here Bronze Age farmers cultivated the land at 450 meters above sea level, which should be compared with the absolute limit of agriculture today, an altitude of 300 meters.
A team of scientists from the University of Copenhagen has analyzed driftwood and beach ridges along the coast of north-eastern Greenland and thereby mapped the extent of sea ice during the Holocene Optimum.
Driftwood that ends up on the coast of northeastern Greenland comes from North America and Siberia. It takes several years to complete the journey, and it can only reach the coast of Greenland if it is encased in ice, since free-floating driftwood will sink during such a long journey.
Driftwood on the beach of Spitsbergen.
By collecting driftwood and dating it with the carbon-14 method, researchers could estimate the amount of sea ice in different time periods.
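The dating step rests on the standard radiocarbon age relation, where the age follows from the fraction of carbon-14 remaining in the wood. A minimal sketch using the conventional Libby mean life of 8,033 years; the sample fraction below is invented for illustration:

```python
import math

def radiocarbon_age(fraction_remaining):
    """Conventional C-14 age in years, using the Libby mean life of 8033 years."""
    return -8033 * math.log(fraction_remaining)

# Hypothetical sample: 41% of the original C-14 remains.
print(round(radiocarbon_age(0.41)))  # about 7,160 years, within the Holocene Optimum
```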
Svend Funder and his colleagues also examined beach ridges along the coast. Today beach ridges are not formed along the coast of northern Greenland, as sea ice shields the coast year round. By the carbon-14 method, the beach ridges have been determined to originate from the Holocene Optimum, during which period the sea must have been ice-free, at least in the summertime.
It was concluded that sea ice reached a minimum between 8,500 and 6,000 years ago, when the limit of year-round sea ice lay 1,000 km further north than at present, and in summertime the ice covered an area only half as large as in the summer of 2007, when sea ice reached its minimum in recent times.
Some studies indicate that the sea surface temperature of the world's oceans was up to 5 degrees higher than today's surface temperature (Darby, 2001).
Painting by Ivan Shishkin and Konstantin Savitsky - Throughout most of the first part of the Holocene, most of Europe, Asia and North America was covered by forest. A large part of the biosphere's carbon was tied up in the wood of the trees. Agriculture was introduced, and as the trees rotted away or were burned, the atmospheric concentration of carbon in the form of CO2 increased.
The ice cap in Peary Land in northern Greenland was drilled in 1977. The ice core contained distinct refrozen meltwater layers all the way down to the bedrock, which indicates that it did not contain ice from the Weichsel glaciation. That is to say, the world's northernmost ice sheet melted completely away during the Holocene Optimum and was only restored when the climate became colder about 4,500 years ago.
Since less water was bound at the poles as inland ice than nowadays, the sea level of the world ocean at that time was 3 meters above today's.
At the end of the Maglemose hunters' period, around 8,500 years before present, the climate in northern Europe had evolved into a so-called Atlantic climate, a mild and humid coastal climate with summer temperatures 2-3 degrees higher than today.
As the sea level of the world ocean rose, salty seawater entered the Ancylus Lake (the Baltic Sea basin), and the water there became salt again. The new sea is called the Littorina Sea after the saltwater snail Littorina littorea. It took several hundred years before the salt content reached its maximum.
As the kilometers-thick Scandinavian ice sheet began to melt, it formed a freshwater lake, the Baltic Ice Lake. It was a cold lake with drifting icebergs, and its surface lay higher than the sea surface of the world's oceans. Some believe that the ice lake was emptied by a major flood disaster around 9,600 BC, but most believe that it drained gradually.
The landscape of northern Europe was dominated by icy cold steppes and outright tundra, roamed by a small number of reindeer hunters.
After the lake gained a connection to the world sea, it became a brackish sea called the Yoldia Sea, named after the mussel Yoldia arctica. The Yoldia Sea was connected with the world's oceans through a strait located where the great Swedish lakes and the Gota river are today.
In the early Hunter Stone Age the tundra became vegetated with a birch forest mixed with aspen, willow, mountain ash and pine.
When Scandinavia was freed from the weight of the huge masses of ice, the land lifted, and the uplift cut off the Yoldia Sea's connection with the world's oceans; it became once again a freshwater lake, called the Ancylus Lake after the freshwater snail Ancylus fluviatilis. The Ancylus Lake may have drained through central Sweden at the great lakes.
As the climate became milder, average summer temperatures rose to 18-20 degrees and winter temperatures rarely fell below freezing. The composition of the forest changed as well: pine replaced birch, and hazel, elm, oak, ash, alder, fir and linden became common.
Around 7,000 years before present, the climate of northern Europe had become a so-called Atlantic climate, a mild and humid coastal climate with summer temperatures 2-3 degrees higher than today. In time the rising water level of the world's oceans let salty seawater enter the Ancylus Lake, and the water in the Baltic Sea basin became salt again; as mentioned above, the new sea is called the Littorina Sea after the saltwater snail Littorina littorea.
Because of land uplift, the Littorina Sea's connection to the world ocean has over the past 2,000 years become increasingly narrow and shallow, turning it into the brackish sea that we know today as the Baltic Sea.
In Australia scientists have analyzed sediments from the seabed off the mouth of the river Murray and found that from 17,000 to 13,500 years before present the Australian climate was wetter than it is at present. No indications have been found of dry periods either in the Younger Dryas or 8,200 years before present, indicating that these cold periods were phenomena limited to the Northern Hemisphere.
The partially dried-up Black Sea around 5,500 years before present.
Samples of bottom sediments from the Australian lakes Frome and Woods show that the climate in the early Holocene, between 9,500 and 8,000 years ago, and again from 7,000 to 4,200 years ago, was considerably wetter than at present. Modern climatic conditions in Australia, with periodic rainy seasons, began about 4,000 years ago.
Analyses of sediments from the Cariaco Basin in Venezuela indicate that the amount of water discharged into the basin during the Holocene Optimum was much greater than today; precipitation in the area must therefore have been much larger in the first half of the Holocene than it is now (Uriarte, Haug).
One of the geographical events in Europe that most brings to mind the Biblical account of the Flood is the sudden flooding of the partially dried-up Black Sea, which took place 5,500 years before present.
For reasons we can only guess at, the inland sea had lost its connection to the world's oceans and had partially dried out; its surface lay 150 m below the surface of the world sea. The Black Sea is fed by many large and water-rich rivers, such as the Danube, Dnester, Dnieper and Don, and it is difficult to understand how it can have lost more water by evaporation than it received from the rivers. It must be evidence that it really was very hot during the Holocene Optimum, when the temperature is assumed to have been 2-3 degrees higher than at present.
Detail of the motif The Flood by Michelangelo from the Sistine Chapel.
A marginal rise in the surface of the world's oceans 5,500 years before present created a small breach in the barrier of the Bosphorus, and a negligible trickle of seawater into the Black Sea basin quickly evolved into a huge waterfall of salt water 200 times greater than Niagara. It is assumed that the seawater gushing into the half dried-up Black Sea made its surface rise by 15 cm a day, and thus raised the water level the full 150 meters up to the level of the world ocean in about three years.
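The refill time follows directly from the figures quoted: 150 meters of rise at 15 cm per day. A quick check:

```python
# Checking the arithmetic above: 150 m of rise at 0.15 m per day.
total_rise_m = 150
rise_per_day_m = 0.15

days = total_rise_m / rise_per_day_m   # 1,000 days
print(days / 365.25)                   # about 2.7, i.e. "about three years"
```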
When the flood occurred, it was Neolithic time in Northern Europe, and it had surely been so for a long time in the area around the Black Sea. The oceanographer Robert Ballard has examined the Black Sea bottom using an underwater robot and found evidence of human habitation.
Many peoples have the story of an initial flood among their old myths. In the Genesis of the Bible, God separated the waters and created Heaven and Earth, and the Bible has the story of the Flood and how Noah and his family survived. In the Egyptian creation myth there was in the beginning also a chaos of water, and the god Ra separated the waters and created the world. In Scandinavian mythology the gods Odin, Vile and Ve killed the original giant Ymer; in the flood created by Ymer's blood all his children, the rime-thurses, drowned except for Bergelmer and his wife, and from these descend the Jotuns. Even the Australian Aborigines have an initial flood among their old myths.
Left: Rock Painting with giraffes from Tassili in southern Algeria.
Right: Rock Painting with an elephant from Tadrart Acacus in Libya.
During a long period that roughly corresponds to the Holocene Optimum, Northern Africa experienced a considerably wetter and rainier climate than that which now prevails in the region. Many state the period to be 8,500 to 3,500 BC (10,500 to 5,500 years before present), but the dating seems uncertain.
Where there is now barren and scorched desert, there was then savannah with widespread grassland and scattered trees. There lived lions, elephants, giraffes and other animals that are now characteristic of southern Africa.
The former professor of African history at London University, Roland Oliver, described the landscape as follows: "The major mountain ranges Tibesti and Hoggar, which today are bare rocks, were then covered with forests of oak and walnut, linden, alder and elm. The lower slopes, along with the smaller mountains - Tassili and Acacus to the north, Ennedi and Air to the south - were covered with olive, juniper and Aleppo pine. Through the grasslands of the valleys flowed rivers teeming with fish."
Rock painting depicting men in boats from Tassili-n-Ajjer in southern Algeria.
Rock art all over the Sahara recalls a time when the country was greener and home to lions, elephants, giraffes, antelopes, hippos and crocodiles. A picture from Tassili, today a scorched desert, shows men standing in boats sailing on water. This shows that lakes and rivers existed in places where today not a straw of grass can be found.
Most rock paintings in the Sahara are found in Algeria, Libya, Morocco and Niger, and to a lesser extent in Egypt, Sudan, Tunisia and some Sahel countries. The Air Mountains in Niger, the Tassili-n-Ajjer plateau in southeastern Algeria and the Fezzan region in southwestern Libya are particularly rich in old rock paintings.
Lake Chad reached a maximum extent of about 400,000 square kilometers, which is larger than the modern Caspian Sea, with a surface level about 30 meters higher than in modern times.
Climate-related settlements in the eastern Sahara through the major phases of the Holocene. Red dots indicate main resettlement areas; white dots indicate more isolated settlements in climatic refuge locations and cyclical shifts of pastures. Precipitation zones are indicated by green nuances, based on best estimates from geological, archaeological and archaeobotanical evidence.
(A) During the Last Glacial Maximum and late Pleistocene, that is 20,000 to 8,500 BC (22,000 to 10,500 years before present), the Sahara was devoid of any settlement outside the Nile Valley, and the desert stretched 400 km farther south than it does today.
(B) With the sudden onset of monsoon rain around 8,500 BC, the hyper-arid desert was replaced by savannah-like landscapes, which quickly became inhabited by prehistoric people. In the early Holocene Optimum, the southern Sahara and the Nile Valley were apparently too humid and dangerous for appreciable human settlement.
(C) Around 7,000 BC human settlements were well established throughout the eastern Sahara, where people created a cattle-nomadic culture.
(D) Decreasing monsoon rain caused the Egyptian part of the Sahara to begin drying out around 5,300 BC. The prehistoric people were forced to retreat into the Nile Valley, settle in oases, or emigrate to the Sudanese Sahara, where rainfall and surface water were still sufficient. The Sahara's return to true desert conditions about 3,500 BC coincided with the initial stages of Egyptian civilization in the Nile Valley. - Kuper and Kropelin (2006).
Around 3,500 BC the desert again spread across North Africa, and the scattered cattle nomads moved to the Nile Valley, where they began tilling the soil, created the first dynasty and thus founded the famous Egyptian culture.
Left: Sphinx from Luxor - A sphinx is a lion with a human head.
Right: Lions that represent the god Aker
In pharaonic times, there were still lions in Egypt. They lived on the border of the desert, where they were known as the keepers of the eastern and western horizon or guardians of the eastern and western descent to the underworld. Sphinxes may depict a pharaoh as a lion figure with a human head.
Elephants lived in North Africa long after the desert had returned to the central Sahara.
The North African forest elephant was somewhat smaller than both the Indian elephant and the African steppe elephant. Its Latin name is Loxodonta africana pharaoensis, and it was exterminated in the second century; reportedly many were killed in the Roman arenas.
It puzzled Cicero that when twenty elephants, an unprecedented number, were attacked by spearmen in the arena, their trumpeting of distress so harrowed the spectators that everyone in the theater began to weep. The show was given by the great man Pompey.
The expulsion from The Garden of Eden painted by Natoire in the year 1740.
The Arabian desert in the Middle East and the Rajasthan desert between India and Pakistan also experienced a wet period in the first part of the Holocene. In the dried-out lakes of these deserts, spores have been found from plants characteristic of savannah vegetation.
Other studies indicate that Central Asia in the early Holocene experienced a wetter climate than today, with summer temperatures 2 to 3.5 degrees higher than at present. In China rice could be planted almost a full month earlier than is usually the case today, and bamboo groves could be found three degrees of latitude farther north than in modern times (Uriarte, Chu Ko-chen).
Many peoples have old myths about an original homeland that they left in the distant past. The ancient Doric Greeks immigrated from the north, the Scandinavian peoples remember Asgaard and Midgaard, and according to their ancient myths the Romans originally came from Troy; probably the best-known myth of this kind is the Biblical story of the expulsion from the Garden of Eden. It is quite likely that the factors that forced these peoples to emigrate were associated with the climate changes that took place at the end of the Holocene Optimum.
Around 5,500 to 5,000 years before present came the Piora cold period, named after the Val Piora valley in Switzerland, the first place where it was identified by pollen analysis. The more heat-loving trees such as elm and linden became rarer and never regained their dominant position in the woods. Indications of this cold period have been found in Alaska, the Andes of Colombia and the mountains of Kenya (Lamb).
Left: Precipitation in the Rajasthani desert. It is seen that in the period before the Holocene it was a fairly dry desert, that precipitation peaked around 6,000 years before present, and that while the Harappan culture existed the rainfall was 600 to 800 mm per year. This can be compared to the average annual precipitation in Denmark, which is 745 mm. (From H. H. Lamb: Climate, History and the Modern World).
Right: The cities of Harappa and Mohenjodaro existed in the Indus valley 4,000 years ago.
In the Indus Valley, where Rajasthan's arid Thar desert spreads today, the cities of Harappa and Mohenjodaro flourished between 4,600 and 3,900 years before present. When their civilization was at its peak, it covered an area larger than the Nile Valley and Mesopotamia combined. The inhabitants cultivated wheat, barley, melons, dates and perhaps cotton. On the savannah and along the now dry river lived elephants, rhinos and water buffaloes. The annual rainfall is estimated to have been between 400 and 800 mm.
In the Arabian desert, evidence of human habitation from about 5,000 years before present has also been found.
In the Minoan warm period millet was grown in southern Scandinavia.
Not much is known about the Minoan warm period beyond what can be gauged from cores from boreholes in the ice sheet. That the climate really was warmer then can be deduced from the fact that in the Minoan warm period, which occurred during the Bronze Age, millet was grown in southern Scandinavia. Today millet is grown in tropical and subtropical regions; it is an important crop in Asia, Africa and the southern U.S. The average annual temperature in Mississippi and Alabama is stated at about 10 degrees, which should be compared with today's average annual temperature in Denmark of 8 degrees. So perhaps the climate of the Minoan warm period was about 2 degrees warmer than the present in southern Scandinavia.
As you may know, Rome is said to have been founded by Romulus and Remus in 753 BC. The Roman historian Livy tells us that a few severe winters occurred in the city's early history, when there was ice on the Tiber and the snow stayed for many days. Before the Roman warm period, beech trees are said to have grown in the mountains around Rome.
Climate changes have always taken place, as is documented even in the Bible. Jeremiah 18:14 in the Old Testament says: "Does the snow of Lebanon leave the crags of Sirion? Do the mountain waters run dry, the cold flowing streams?", indicating that it was relatively cool around the Mediterranean when Jeremiah lived, around 600 BC. In our days there is no eternal snow on the mountains of Lebanon.
Left: Sea ice in the Arctic Ocean - The white area represents the extent of sea ice on 31 August 2007. The red line marks the average distribution of sea ice in August between the years 1979 and 2000. It is seen that in modern times there is quite a long way from Iceland to the sea ice at the Greenland coast north of Scoresbysund. Whether Pytheas landed on the Faroe Islands, Iceland or western Norway, only a day's sailing to the frozen sea means that the summer sea ice had a significantly greater extent than at present.
Right: Pytheas' travels - From Histoire des Mares: Pytheas le massaliote.
Around 310-300 BC the Greek explorer Pytheas traveled from Massalia (Marseille) along the shores of Western Europe. He came to Scotland and the Hebrides, where he saw waves that were "80 cubits high" (a cubit is an ancient unit of length of 45.72 cm). He sailed to the island of Thule, located 6 days and 6 nights of sailing north of Berrice, which is assumed to be Shetland. There is uncertainty about whether Pytheas' Thule was the Faroe Islands, Iceland or western Norway.
The distance between Shetland and the Faroe Islands is 150 nautical miles, and the distance between Shetland and Iceland is about 380 nautical miles. On a journey to the Faroe Islands he would therefore have kept a speed of about 1 knot, which sounds quite manageable, even for his time. If Thule was Iceland, he would have kept a speed of about 2.6 knots, which does not sound impossible with a good wind.
He describes Thule as an island located six days' sailing north of Shetland, near the frozen sea. There is no night at midsummer, he says, indicating that the location must be on the Arctic Circle and that he visited the island in the summer. The frozen sea is one day's sailing north of the island, he says, which also indicates that the island must be Iceland rather than the Faroe Islands.
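The speed estimates above, and the time estimate used a little further below, follow from nothing more than distance over time:

```python
# Six days and six nights of continuous sailing.
hours = 6 * 24

print(150 / hours)     # Shetland -> Faroe Islands: about 1.0 knot
print(380 / hours)     # Shetland -> Iceland: about 2.6 knots

# The reverse check used below: 350 nautical miles to the modern summer
# ice edge at 2.6 knots takes about 5.6 days and nights.
print(350 / 2.6 / 24)
```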
Left: A statue of Pytheas in front of the stock exchange in Marseille.
Right: A reconstruction of Pytheas' ship. A vessel of this type cannot have navigated the North Atlantic in winter, which underlines that he visited Thule in the summertime.
However, in modern times the summer sea ice nearest Iceland is found north of Scoresbysund on Greenland's east coast, and the distance from Iceland to north of Scoresbysund is more than 350 nautical miles. Pytheas sailed at perhaps 2.6 knots, so it would have taken him almost 6 days and nights to reach the frozen sea - with the extent of sea ice of modern times.
But as he wrote that the sea ice was only a day's sailing north of Thule, we can conclude that the summer sea ice in the Arctic Ocean had a much greater extent in his time, around 300 BC, than it has today.
Pytheas mentioned that the island was inhabited. People lived on millet and other herbs and on fruits and roots, and where there were grain and honey, they made their drink from it. The country was rainy and lacked sunshine, he wrote. This leads many to think that he in fact landed in Norway. However, if the frozen sea was only one day's sailing away even from there, it indicates an even greater extent of sea ice.
Fresco in Pompeii, depicting a Roman orgy - Casa dei Casti Amanti.
As seen from their light dress, it must have been quite hot.
The Roman warm period started quite suddenly around 250 BC and ended about 400 AD. The ancient Greeks and Romans lived in a fairly pleasant climate, which you can also see from the airy robes in which the antique statues are often dressed.
Some studies in a bog in Penido Vello in Spain have shown that in Roman times it was around 2-2.5 degrees warmer than in the present.
The Roman warm period is amply documented by numerous analyses of sediments, tree rings, ice cores and pollen - especially from the Northern Hemisphere. Studies from China, North America, Venezuela, South Africa, Iceland, Greenland and the Sargasso Sea have all demonstrated the Roman Warm Period. Additionally, it has been documented by ancient authors and historical events.
The Roman Columella wrote in the first century after Christ in "De Re Rustica" (Book 1), citing the "reliable author Saserna": "Areas (in Italy), which previously due to the regular severity of the weather could not provide any protection for vine plants or olive trees planted there, now that the former cold has subsided, produce olives and wine in the greatest abundance."
Coins from Carthage with elephant motifs. Note that on the first coin from the left the man is quite large relative to the elephant, which indicates that these elephants really were relatively small.
Hannibal brought a whole army, equipped with 37 war elephants, over the Alps in 218 BC - in winter.
The ancient writer Pausanias wrote in the second century on the use of war elephants: "For although the use of ivory in arts and crafts has apparently been known to all men from ancient times, no one had seen the actual animals before the Macedonians invaded Asia, except the Indians themselves, the Libyans and their neighbors." It sounds as if he thinks that elephants naturally belong in both India and Libya.
Roman bridges in Syria, Jordan and Iraq. The rivers they once crossed dried up long ago.
Top left: The ruins of a Roman bridge between the villages of Ayyash and Ain Abu Jima in the northwestern part of the Jebel Bishri by the river Euphrates. - Photo: Minna Lonnqvist.
Top right: Roman bridge in Uthma in Syria - photo: arminhermann.
Middle left: Roman bridge in Maharda in Syria.
Middle right: The Roman bridge Djemarin in Syria.
Bottom left: The Roman bridge Ain Diwar by Malikiyeh in Syria near the Turkish border.
Bottom right: Roman bridge that spans the Sabun Suyu, which was a tributary of Afrin in Syria.
It seems unlikely that Hannibal would have imported his 37 war elephants all the way from India to his native Spain; therefore most assume that they were African elephants.
The ordinary African steppe elephant is difficult to domesticate, and therefore the animals most likely belonged to the now extinct North African forest elephant, which was slightly smaller than both the Indian elephant and the common African steppe elephant.
Around the year 400 AD the Roman Symmachus complained in his letters about the duties he had to pay on the bears he imported from North Africa for the circus games his son was obliged to give on his entry into the senatorial order. The crocodiles he had managed to find refused to eat, and he worried that the poor animals would die of starvation before they could play their part in his son's games.
The location of vineyards and olive trees is also a good indicator of climate. During the culmination of the Roman warm period, olive trees grew in the Rhine Valley in Germany, and citrus trees and grapes were cultivated in England as far north as Hadrian's Wall near Newcastle. Scientists have found olive presses in Sagalassos in the Anatolian highlands of present-day Turkey, an area where it is too cold today to cultivate olives.
The continued spread of vineyards to the north can be deduced from a decree of Emperor Domitian prohibiting the cultivation of wine in the Empire's western and northern provinces beyond the Alps. The decree was revoked in 280 AD by Probus, who allowed the Romans to introduce vineyards in Germany and England.
The ruined city of Petra in the Jordanian desert. It was supplied with water by a constructed channel. We must assume that once, when the city was built, there was a supply of water on site. As the climate became drier, the inhabitants built a channel for water; finally, people gave up and abandoned the city. It is known that in the Crusader period, around 1100 AD, Petra was still inhabited, and King Baldwin 1. of Jerusalem stayed for a while in the city.
Strabo wrote that around the years 120 to 114 BC a series of storms occurred in the North Sea, causing the so-called Cymbrian Flood that covered large areas along the coasts of Denmark and northern Germany with water and thereby caused the migration of the Cimbrians and Teutones.
North Africa was Rome's granary, which can be difficult to imagine today; but it was much greener and more fertile then than at present. The city of Petra in Jordan thrived between 300 BC and 100 AD; today it is abandoned and lies far out in the Jordanian desert.
In the Roman warm period a more humid climate prevailed in North Africa and the Middle East than today. In Alexandria, Claudius Ptolemy kept a weather diary in 120 AD. It shows a remarkable difference from the present climate of the place: it rained every month except August, there was thunder in all the summer months and in some other months, and very hot days were most common in July and August.
Ptolemy of Alexandria also wrote about four rivers in Arabia and about trade routes that had previously been used but were already impassable in his time. In the Middle East the ruins of many Roman bridges still exist, built over rivers that are now dry.
The Roman Warm Period ended around 350-400 AD.
Top: Vandals, Svebes and Alans crossed the frozen Rhine in 406 AD - painting by an unknown artist.
Bottom: Density of growth rings in larch trees at Zermatt in the Alps. Time progresses from left to right. The vertical red line marks the year 400 AD - from "Climate, History and the Modern World" by H. H. Lamb.
The Vandals crossed the frozen Rhine on New Year's Eve 406 AD, thus commencing the Migration Period and heralding the downfall of the Western Roman Empire. The fact that the Rhine was frozen demonstrates a completely different climate from the one that prevailed when olive trees were growing in the Rhine Valley. I do not recall the Rhine having frozen over in modern times.
Many believe that widespread drought in central Eurasia triggered the migrations towards both China and the Roman Empire from about 300 AD to 500 AD.
Top: The Scandinavian legend about the Fimbul winter.
Bottom: Changes in the upper tree-line in two areas, the White Mountains of California and the Alps in Switzerland and Austria. It shows that the tree-line, and thus the temperature, has largely been declining for at least 3,000 years - the vertical red line marks 400 AD. - from "Climate, History and the Modern World" by H. H. Lamb.
H. H. Lamb wrote in his "Climate, History and the Modern World": "For centuries in Roman times, from about 150 BC to 300 AD or some few decades later, camel caravans used the Great Silk Road through Asia for trading in luxury goods from China. But from the fourth century AD, as we know from changes in the water level of the Caspian Sea and from studies of irregularities in rivers, lakes and abandoned cities in Sinkiang and Central Asia, drought developed to such an extent that it stopped the traffic on this route. Other severe stages of this drought occurred between 300 AD and 800 AD, and especially around these dates, as can be seen from old shorelines and old port structures that indicate a very low sea surface level in the Caspian Sea around these times." (page 159).
Chinese cave painting from the Mogao Caves at Dunhuang from the Northern Wei period (386-535 AD) - Some tough men - were they kings?
The drought in Eurasia thus appeared to have had two maxima, at about 300
AD and around 800 AD.
Already around 300 AD, China had problems with refugees from the steppe. The "Five Hu" peoples from the north, the Xiong Nu, Xianbei, Di, Qiang and Jie, took refuge in the empire behind the Great Wall. When the mandarins ordered them to travel back to their homelands, they answered with force and created their own migration states. This began the period in Chinese history called "The Sixteen Kingdoms".
The drought on the eastern steppe also drove many peoples towards the Roman Empire. From around the year 400 AD, Visigoths, Ostrogoths, Vandals, Alans, Svebes, Huns, Gepides, Angles, Saxons, Franks, Jutes, Alemanns, Burgunds and Langobards invaded the empire. Later came attacks by Avars, Magyars, Arabs, Vikings and Wends. The political divisions of modern Europe are mainly a result of the showdown among all these peoples.
Ragnarok - Scandinavian mythology tells of the Fimbul Winter that will herald the Ragnarok battle, the end of the world.
In the first part of Snorri's Edda (55), Gylfaginning, it is told: "Then said Ganglere: What tidings are to be told of Ragnarok? Of this I have never heard before. Har answered: Great things are to be said thereof. First, there is a winter called the Fimbul-winter, when snow drives from all quarters, the frosts are so severe, the winds so sharp and piercing, that there is no joy in the sun. There are three such winters in succession, without any intervening summer. But before these there are three other winters, during which great wars rage all over the world. Brothers slay each other for the sake of gain, and no one spares his father or mother in that manslaughter and adultery."
The Byzantine historian Procopius recorded of the year 536 AD, in his report on the Vandal war: "During this year a most dread portent took place. For the sun gave forth its light without brightness - and it seemed exceedingly like the sun in eclipse, for the beams it shed were not clear. From the moment the phenomenon showed up, humans were at all times affected by war, famine and other deadly things." His fellow Byzantine Lydus wrote: "The sun became weak - for almost a full year - so that the fruits died without harvest."
Michael, the Patriarch of Antioch in Syria (1126-1199 AD), wrote about the year 536 AD: "The sun became dark and the eclipse lasted for 18 months."
The Irish Annals of Ulster recorded: "A shortage of bread in the year 536 AD." The Annals of Inisfallen wrote: "A shortage of bread in the years 536-539 AD". From China it was reported in these years that snow fell in August.
Gregory of Tours wrote in "History of the Franks" (Book 3:37) from the years 539 to 594 AD: "In this year the winter was terrible and more rigorous than usual, so that the rivers were kept in the iron grip of the frost and made into a road for the people like it was dry land. Birds, too, were affected by cold and hunger, and were captured by hand without using the snare, when the snow was deep."
Extent of sea ice in the Arctic Ocean in winter - The yellow line shows the
average spread of ice in January, from 1979 to 2000. The white area represents the distribution in 2011.
This hard winter in the Loire Valley has been dated by Gregory to the year when Theodobert died. This happened 37 years after the death of Clovis, who was king of the Franks in the years 465-511 AD. The year of the harsh winter is then 548 AD. This suggests that the severe winters in Europe from the year 536 AD may have stretched at least to 548 AD. (Most of the above about the year 536 AD is from Flemming Rickfors - see link below.)
In Jaeren in Norway, large areas were abandoned as farmland around the year 500 AD, which indicates a colder and harsher climate. Studies of peat bogs in Jutland show evidence of shifting sand from around the same time. (Lamb)
The Irish monk and geographer Dicuil wrote the book "De Mensura Orbis Terrae", which became known at the Carolingian court in the year 825 AD. He described islands in the ocean previously inhabited by hermits who had now been displaced by Vikings. He relays a description from monks who had lived in "Thule" until the year 765 AD. They had experienced the frozen sea, located one day's sailing to the north. They told of Thule that "there was no darkness to prevent any from doing what they wanted to do". Their description of the sun's path, as well as the temperature, fits Iceland perfectly.
Reconstruction of an Irish curragh from Tim Severin's book "The Brendan Voyage" - To prove that St. Brendan really could sail from Ireland via the Faroe Islands, Iceland and Greenland to Labrador, Tim Severin built a curragh and carried out the journey. - It is obvious that such a vessel would not be able to cope with a winter in the North Atlantic. The legend of St. Brendan's journey contains no information on climate.
We must assume that he means "one day's sailing" in the summer. It must have been altogether impossible for the Irish monks to navigate the North Atlantic in winter in their small vessels, which may have resembled the traditional leather-clad Irish curragh. This indicates that the extent of sea ice was considerably greater around 700-800 AD than it is today.
Monastery annals tell of increasingly severe winters. Thus the winter of 763-64 AD was described in many places in Europe as a winter with huge snowfall and heavy losses of olive and fig trees in southern Europe.
The winter of 858-60 AD was also clearly unusually severe: there was ice on the Dardanelles strait, and the ice on the Adriatic Sea near Venice was thick enough to support fully loaded wagons.
Also around 860 AD, the Norwegian Floki Vilgerdason became the first Scandinavian to navigate the waters around Iceland. When he visited the northern Arnarfjord, he found it packed with ice, which indicates that the climate was considerably colder than nowadays, also in the northern countries.
Heat and cold periods through 2,000 years, compiled on the basis of the density of the growth rings of pine trees in northern Scandinavia from the period 138 BC to 2006 AD - The blue line is the actual measurements. The red graph is the result of mathematical smoothing with a 100-year rolling average. The dotted lines above and below the red temperature graph represent uncertainty. The red dotted line at the top shows the general trend, namely that the Middle Ages were warmer than today, and that the Roman era was warmer than the Middle Ages. The vertical gray fields represent selected 30-year periods. The temperature scale to the left shows the deviation from the mean temperature of the period 1951-1980. JJA means June, July, August. Based on data from Jan Esper, Ulf Buntgen, Mauri Timonen and David C. Frank, "Variability and extremes of northern Scandinavian summer temperatures over the past two millennia" (2012).
Advanced and accurate measurements of the density of the growth rings in northern Scandinavian pine trees have formed the basis for a highly accurate modern reconstruction of temperatures over the past 2,000 years. It shows that today's warm period is colder than the medieval warming, which again was colder than the Roman era. In modern times there have been some particularly warm years, such as 2004, but they become much less visible after mathematical smoothing. Probably there have always been a few years of exceptional heat or cold, as also follows from the historical accounts above of particularly severe winters.
Sea surface temperature in the East China Sea (between Japan, Taiwan and China). It is seen that changes in temperature did not happen simultaneously over the whole Earth. The Roman Warm Period also took place in China; the cold spell of the Migration Period was significant but not very long-lasting, and was followed by the Sui-Tang warm period. The Medieval Warm Period was not particularly significant in East Asia, and nor was the Little Ice Age. But the steadily falling temperature trend has been the same in China as around the Atlantic.
Jan Esper and his co-authors of "Variability and extremes of northern Scandinavian - -" conclude that their results "provide evidence of considerable warming during the Roman and Medieval warm period on a larger scale and of longer duration than the twentieth century heating period." More specifically, they place the Medieval Warm Period around 700 to 1300 AD and identify the warmest 30-year interval of this period as 918 to 947 AD, in which the June, July and August temperatures were about 0.3 degrees hotter than in the hottest 30-year interval of the current warm period. Their findings differ from those of other researchers, who think that the Medieval Warm Period began around 950 AD.
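How such a "warmest 30-year interval" is picked out of a yearly reconstruction can be illustrated with a simple sliding-window maximum. The sketch below uses an invented series, not the Esper et al. data:

```python
import random

def warmest_interval(temps, first_year, window=30):
    """Return (start year, mean) of the window with the highest mean temperature."""
    best_start, best_mean = None, float("-inf")
    for i in range(len(temps) - window + 1):
        mean = sum(temps[i:i + window]) / window
        if mean > best_mean:
            best_start, best_mean = first_year + i, mean
    return best_start, best_mean

random.seed(0)
# Hypothetical yearly JJA temperatures covering 700-1299 AD.
temps = [14 + random.uniform(-1, 1) for _ in range(600)]
start, mean = warmest_interval(temps, first_year=700)
print(start, round(mean, 2))
```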
Norse settlements in Greenland. The map shows the Eastern Settlement; M stands for Middle Settlement and V stands for Western Settlement. When the Norse settlements were at their peak, it is estimated from the size and number of farms that there were between 3,000 and 5,000 Scandinavians in Greenland, roughly corresponding to the number of inhabitants of Copenhagen at the same time.
In North America there seems to have been a relatively warm and humid period between 700 and 1,200 AD, in which maize cultivation spread along the Mississippi up to Minnesota. In old refuse heaps in Iowa, archaeologists have found bones of elk and deer, which are woodland animals; after 1,200 AD they were rather abruptly replaced by bones of bison, a steppe animal, indicating a change to a drier climate.
Pollen analyses from Lake Chad in Africa, conducted by J. Maley of the Languedoc University in France, show a maximum of pollen from water-demanding plants in the period 700-1,200 AD, and that these plants gradually disappeared during the period 1,300 to 1,500 AD (Lamb).
The Medieval warming did not occur simultaneously across the Earth. In East Asia its place was partially taken by the Sui-Tang warm period, which occurred between about 500 and 800 AD. The medieval warmth was most noticeable around the North Atlantic, but it can be traced even in Antarctica.
The story of the Scandinavian settlements in Greenland is a good illustration of the Medieval warm period.
In 986 Erik the Red sailed for Greenland with 25 ships. Only 14 reached their destination; some sank, and others returned to Iceland. Most of the remaining 14 ships sailed into the fjords in the south around Julianehaab and founded the Eastern Settlement; others sailed a little further north and founded the small Middle Settlement around present-day Ivigtut, and some sailed all the way up to the Godthaab fjord and founded the Western Settlement.
The Greenlanders built farms, houses and churches. There was both a monastery and a nunnery.
In the Norse farmers' manure heaps, archaeologists have found large quantities of cod bones. This shows that it was generally warmer than at present, because throughout the 1900s it has not been possible to fish for cod in Greenland waters; it has been too cold.
The Greenlander Thorkell Farserk was a cousin of Erik the Red. Once, when he was expecting Erik to visit, he wanted to fetch a sheep that grazed on the island of Hvalsey in Hvalseyjarfjord. Since he happened to have no boat available, he swam out to the island, got hold of the sheep and swam back again so that he could entertain his cousin.
The distance to the island is slightly more than 3.2 kilometers. Dr. Pugh from the Medical Research Laboratories has given H. H. Lamb his assessment of this achievement. From studies of Channel swimmers' endurance, we know that 10 degrees is the absolute lowest water temperature at which an experienced swimmer can cover this distance. In modern times, the summer water temperature in Hvalseyjarfjord is usually in the range of 3-6 degrees. The water at that time must therefore have been at least 4 degrees warmer than today.
The Norwegian medieval document Kongespejlet (the King's Mirror) from about 1,250 AD tells about the sailing route to Greenland: "As soon as the great ocean has been crossed, there is such an abundance of ice that nothing like it is known from any other place in the whole world, and it lies so far from land that there are no less than four or more days of travel over the ice to reach it; but this ice lies more to the north-east and north of the land than to the south-west and west."
This must mean that when you sailed due west from Iceland in the summer, which was the original route, you would meet sea ice along the coast of Greenland. Nowadays there is usually no sea ice as far south in the summer. But maybe they sailed very early in the year.
The sailing routes of the Vikings to Iceland, Greenland and America. - From McGovern and Perdikaris, 2000.
About Greenland, Kongespejlet further narrates: "But since you asked if the country was free of ice or not, or if it was covered with ice like the ocean, then you should know for sure that it is a small part of the country that is free of ice, but all the rest is covered with it, and people do not know whether the country is large or small, because all mountain areas and also all valleys are hidden by the ice, so that you nowhere find an opening in it." - "Few in number are the people in this country, because there are few places which are so ice-free that they are habitable" - "But since you ask what the people live on in that country, since they do not sow grain, you must know that there exist many other countries where the people do not sow, and yet people live in them, because humans do not live by bread alone. About Greenland it is said that there are good pastures and both good and large farms, because they have much cattle and many sheep and produce much butter and cheese. On this the inhabitants have their living, and for the most part also on meat and all kinds of hunting prey: reindeer, whale, white bear and seal meat." - "But since you asked if the sun shines in Greenland, or whether it could ever happen that the weather was beautiful as in other countries, then you must know for sure that there can be beautiful sunshine, and that the country in summertime can most of the time be called weather-good."
Reconstructed Viking house at L'Anse aux Meadows in Newfoundland - After the excavation of 2,400 Viking objects, there is no doubt that the Vikings discovered America long before Columbus.
"But there is a big difference in the movements of the sun because as soon as it becomes winter, it is almost always night, but as soon as it is summer, it is almost all the time day. And when the sun goes highest, it has sufficient force for shine and brightness, but only little to warming and heat; however, it has so much force that where the ground is free of ice, it heats so much that it can provide good and fragrant grass; therefore, people can quite well live in the country, where it is thawed, but that is indeed very little." - "When it is stormy weather it happens with greater rigor there than in most other places, both in terms of the power of the storms and the violence of frost and snow."
But it seems that the old document Kongespejlet was not completely well informed. Danish researchers from the National Museum have found small pieces of charred barley in Greenland Viking middens. The finding proves that the Norse Greenlanders actually cultivated barley and were able to produce the main ingredient needed to brew beer.
"If the grain had been imported, it would have been threshed, so when we find parts not threshed, it is a very strong indication that the first Norsemen in Greenland cultivated their own grain", project leader Peter Steen Henriksen said. "One must assume, that if the grain had arrived in Greenland with a ship, it had been threshed first, otherwise it would take up too much space."
Visible ditches and furrows from medieval fields at Redesdale in Northumberland in England, 300-320 meters above sea level - From H. H. Lamb, "Climate, History and the Modern World".
Another good indicator of climate is the spread of vineyards. When William the Conqueror prepared the Domesday Book in 1086 AD, he recorded vineyards in 46 locations in southern England, from East Anglia to modern Somerset. Today there are 400 English wine producers, but this must be seen in the light of the fact that modern wine-growers have developed vines that are more cold-tolerant than the historical varieties. H. H. Lamb concludes that the medieval average summer temperature was probably 0.7 to 1.0 degrees higher than today, and the climate must have been less prone to frost in May.
At the abandoned village of Houndtor, 400 meters above sea level in the highlands of Dartmoor in the county of Devon in England, and at Redesdale in Northumberland near the Scottish border, 300-320 meters above sea level, visible traces of cultivated fields from the Middle Ages still exist (H. H. Lamb). At these altitudes grain cannot be cultivated in modern times.
In medieval York archaeologists have found the presence of the insect Heterogaster urticae, which today lives only on nettles in sunny places in southern England; this also indicates that the Middle Ages were warmer than the present (Lamb).
In the Middle Ages two Sicilian rivers, the Erminio and the San Leonardo, were described as navigable, which today is quite impossible even with small vessels. This shows that precipitation was greater in the warm climate of the medieval period.
The Rhone Glacier in north-eastern Switzerland on a postcard from 1870, compared to reality in 2006.
The development of this glacier has been very visible, as it can be seen from the nearby small town. During the last 120 years the glacier has retreated about 1,300 meters and left a trail of bare stone.
There is no consensus on when the Little Ice Age started and ended, but let us take it that the Medieval warm period ended and the Little Ice Age began around the year 1300 AD, as proposed by Jan Esper and his co-authors above. During the Little Ice Age winters were alternately mild and very cold, just like today, but in general it was colder - and it became really cold around 1690 AD, which may be designated the culmination of the Little Ice Age. Winters around 1660 and 1770 AD were also extraordinarily cold. The winter of 1850 AD was likewise very cold, but then the temperature rose, and one can say that the Little Ice Age ended and the modern warm period began.
The cold weather came earlier at high latitudes than in southern Europe. In Greenland and in the northern and northeastern parts of Iceland the growing of barley was abandoned as early as about 1300 AD, while wine-growing in southern England and northern France and the cultivation of oranges in Provence in southern France were only abandoned about 1500 AD.
Eight time series of glacier advances and retreats through the Holocene during the last 6,000 years. Most are not continuous, because they are based on morphological evidence such as moraines, U-shaped glacier valleys, etc. The Scandinavian series are continuous. To make them easier to compare, they are all presented as graphs. The graphs for the Alps are best supported by tree growth rings and other documentation for the past 3,500 years. Except for Scandinavia and the Alps, the exact times of the glacial retreats are not known, and they are shown somewhat arbitrarily. The brown areas show where the graphs have been inferred from indirect evidence such as remains of trees above the modern tree-line, buried topsoil, etc.
When a graph is above its horizontal line, it indicates warming and glaciers smaller than today; when it is below the line, it indicates cooling and glaciers bigger than today. Note that the names of the mountains are placed at the start of the graph itself, not at the corresponding horizontal line. All graphs end on their horizontal line, which represents the present.
Sources of information:
Franz Josef Land (an archipelago in the Arctic Ocean): Lubinsky et al. (1999).
Spitsbergen (An island in the Arctic Ocean, also known as Svalbard): Svendsen and Mangerud (1997) and Humlum et al. (2005).
Northern Scandinavia: Nesje et al. (2005), Bakke et al. (2005), IPCC (2007).
Southern Scandinavia: Matthews et al. (2000,2005), Lie et al. (2004), IPCC (2007).
Alps: Holzhauser et al. (2005), Jörin et al. (2006).
Brooks Range (in northern Alaska): Ellis and Calkin (1984).
Western Cordillera-North America: Koch and Clague (2006).
Western Cordillera-South America: Koch and Clague (2006).
Glaciers of the Northern Hemisphere were typically smaller in the early Holocene; they have then grown through the Holocene to a maximum during the Little Ice Age, after which they have retreated a little.
From: "Mid- to Late Holocene climate change: an overview" by Heinz Wanner, Jurg Beer, Jonathan Butikofer with others.
The Little Ice Age seems to have been most noticeable in Europe and North America, but it could also be felt in China, Alaska, the Caucasus and the Himalayas; everywhere the glaciers grew. Only South America seems to have been quite unaffected.
Cultivation of heat-demanding fruit trees such as mandarins and oranges had to be abandoned in China's southern Jiangxi province, where these varieties had previously been cultivated for hundreds of years. In the first half of the 1600s China was plagued by droughts and floods. Unable to pay their taxes to the Ming emperors, the peasants revolted and thus paved the way for the Manchu conquest of China and the subsequent Qing dynasty.
The Western Settlement in the Godthaab fjord in Greenland was abandoned as early as around 1350 AD, at the beginning of the cooling period.
Icelandic sagas tell that in 1350 AD the bishop of the Eastern Settlement received reports that the Western Settlement needed help to drive away the aggressive Inuit, whom the Norsemen called Skrellings. The bishop sent a church envoy, Ivar Bardson, to the rescue. But when he arrived at the Western Settlement, he found the country deserted except for a few stray livestock. Taking the Icelandic sagas literally, it was the Skrellings more than the worsening climate that caused the end of the Western Settlement. Apparently, meetings between Inuit and Norsemen did not always proceed peacefully.
The Western Settlement covered very nearly what today is Nuuk (Godthaab) municipality. The red dots indicate farms.
In 1723 the Norwegian explorer and missionary Hans Egede visited the Godthaab district and asked the Inuit at Ujaragssuit, near the ruins of the Western Settlement's church, whether they had destroyed it. They replied no, and said that the Qavdlunak (Norsemen) had done it themselves before they departed.
Ivar Bardson lived in Greenland from 1341 to 1364. He wrote of the navigation from Iceland to Greenland: "From Snefelsness in Iceland to Greenland by the shortest route: two days and three nights, to be sailed due west. In the sea there is a reef called Gunbjornsskaer. That was the old route, but now the ice comes from the north so close to the reefs that no one can sail the old route without risking his life."
The last certain report we have about the Eastern Settlement is from some Icelandic travelers' account of a wedding in Hvalsey church: "One thousand four hundred and eight years after our Lord Jesus Christ's birth we were present, saw and heard in Hvalsoy in Greenland that Sigrid Bjornsdatter was married to Thorstein Olafson." Thus it is written in the laconic Icelandic sources that tell of Norse life in Greenland. The wedding took place on the first Sunday after Cross Mass, 14 September 1408 AD.
Excavation of a Norse grave in Vatnahverfi in the Julianehaab district.
In 1492 Pope Alexander VI nevertheless expressed his anxiety about the situation in Christendom's northern outpost: "The Church in Garda lies at the end of the world in Greenland, and the people who dwell there are accustomed to live on dried fish and milk due to the lack of bread, wine and oil - shipping to that country is very irregular because of widespread ice on the water - no ship has called at their shores for eighty years, it is believed - or if travel takes place, it is thought, only in August - and it is also said that no bishop or priest has held office there in eighty years or so." "The people of Greenland have been abandoned by the church for so long that they have returned to pagan practice," wrote the Pope as he offered the Benedictine monk Matthias Knutson the position as bishop of Gardar, if he would be willing to travel there and lead the people back to Christianity.
Carbon-14 analysis of bones taken from Norse cemeteries in Greenland suggests that the Eastern Settlement existed until about the year 1500 AD.
During the "Little Ice Age" in Europe flourished life on frozen canals in the Netherlands. People were skating on the ice and shopped in market stalls, which was established on the ice. It was a popular motif for several painters - Painting by Francis G. Maye.
Many Eskimo legends tell of battles between Norsemen and Inuit - of Norsemen who kill many Inuit, and of Inuit who kill Norsemen. Hans Egede made many trips along the coast in search of the missing Norsemen. When he came to the Julianehaab district in 1723, it seemed to him that the Inuit there were "quite beautiful and white" as opposed to those he had previously met. His son Niels Egede was the first to learn the Inuit language. The Inuit told him that the Norsemen had been attacked by pirates, and that their women and children had fled to the Inuit. When they returned, all the Norse houses had been burned down and the men had been killed. The Inuit then went deeper into the fjord and married the Norse women.
A ship commanded by John "Greenlander" was in 1540 traveling from Hamburg to Iceland, but was blown off course by a storm. The crew went ashore in Greenland at the Eastern Settlement. They found a settlement that looked like those in Iceland, but the buildings were empty apart from the body of an old man dressed in leather with a cap of cloth lying on the floor in a house with a worn knife in his hand.
Temperature and humidity near Bern and Zurich in Switzerland as an average for each decade from around 1520 to 1820 AD - The solid line represents temperature and the dotted line humidity - It can be seen that the Little Ice Age peaked around 1690, and that there was widespread frost in the spring months of March, April and May. Even in the summer months there was freezing weather - Prepared by Dr. Christian Pfister from the Geographical Institute at the University of Bern - From "Climate, History and the Modern World" by H. H. Lamb.
When Hans Egede in 1741, after his stay in Greenland, lived in Copenhagen, he wrote about a monk of Greenlandic blood: "At a German Autorem named Dithmarum Blefkenium I otherwise found an account about a monk, who was supposed to have been born in Greenland, with the bishop from that same place Anno 1645 (presumably it must be 1545) and should have travelled to Norway and since lived in Iceland 1546, where he, according to his report, personally should have spoken to him. This same monk should have told strange and curious things about a Dominican Monastery in Greenland called Sct. Thomas Monastery, into which he in his childhood had been submitted by his parents with the intention that he there should become monk."
Also another source, which Hans Egede mentions, talks about this monk. That is a Danish sea captain named Jacob Hall, who had also met the Greenlandic monk and described him as follows: "He had a wide face, and his color was brown".
Around 1623-25 AD Bjorn Jonsson from Skardsa in Iceland reported that he had found pieces of wreckage on the beach, which were typical for ships built in Greenland.
In 1529 a huge Turkish army under Sultan Suleyman had to withdraw from a siege of Vienna due to poor military results, cold rain and heavy snow as early as October.
The Turkish siege of Vienna in 1529 - Due to cold rain and heavy snow already in October, the Turks had to withdraw in the middle of that month. They lost many soldiers and much equipment during the retreat to Constantinople through snow and mud.
According to a weather diary from Zurich covering the period 1546-1576 AD, the frequency of snowfall increased by 44% in the first part of the period, until 1563, and by a further 63% in the last part, up to the year 1576 AD (Lamb).
Tycho Brahe's observations in Denmark from 1582 to 1597 indicate a winter temperature that was 1.5 degrees below the average of the period 1880 to 1930 AD. Furthermore, Tycho Brahe's observations show that wind from the east was dominant; in his notes southeast was the most frequent wind direction (Lamb).
A priest in eastern Iceland named Olafur Einarsson wrote a poem in the early 1600s that illustrates the Icelanders' problems:
Formerly the earth produced all sorts
of fruit, plants and roots.
But now almost nothing grows -
Then the floods, the lakes and the blue waves
Brought abundant fish.
But now hardly one can be seen.
The misery increases more.
The same applies to other goods -
Frost and cold torment people
The good years are rare.
If everything should be put in a verse
Only a few take care of the miserables -
The Battle of Tybrind Vig on 30 January 1658. The Swedish king Karl 10. Gustav marched over the ice on the strait of Lille Belt with 10,000 men at night and caught the Danes completely by surprise. Painting by Johan Philip Lemke.
In the winter of 1657-1658 war broke out between Denmark-Norway and Sweden. At that time the Swedish king was waging war in Poland. The winter proved to be unusually cold, and all the Danish waters became completely covered by ice. On the announcement of the Danish declaration of war, the Swedish king Carl Gustav lost any interest in Poland and turned immediately against Denmark. He led his ten thousand men with horses and a few cannons over the ice from island to island, and soon they appeared in front of the walls of Copenhagen. At the same time, the Danes were unable to use their fleet because of the ice. Denmark-Norway was completely unprepared for this development, and King Frederik 3 asked for negotiations. At the Peace of Roskilde, Denmark-Norway had to cede Scania, Blekinge, Bornholm and the Norwegian provinces of Bohuslen and Trondheim.
Around 1580 AD, the Denmark Strait between Greenland and Iceland was completely blocked by pack ice in several summers. In the winter of 1695 AD Iceland was completely surrounded by sea ice.
Already around 1615 AD the Faroese cod fishery began to fail, and through the thirty years of the Little Ice Age climax, 1675-1704 AD, there were no cod at all in Faroese waters. A recent Danish study has shown that cod can thrive at many different temperatures, but they seem to prefer temperatures between 1 and 8 degrees when they breed. Perhaps water temperatures in the North Atlantic were simply too low. For most of that era it was likewise not possible to fish for cod in the waters around Greenland, probably for the same reason.
The Medieval warming and the Little Ice Age. - From the BBC documentary: "The Great Global Warming Swindle".
In Norway new small glaciers formed in the mountains of Hardanger during the maximum of the Little Ice Age. Between 1690 and 1710 AD there were numerous cases in Norway of farms being destroyed by advancing glaciers. The Nigard glacier, for example, advanced 3 km between 1710 and 1743 AD and thereby destroyed a farm named Nigard. The owner sent a letter to King Frederick 5 asking for compensation for the destruction.
Also in North America the winters were very cold and long. The inhabitants of the small English colony of Jamestown, founded in 1607 on the coast of Virginia in the present-day U.S., complained about unusually long and cold winters. Quebec's founder Samuel Champlain noted that in June of the year 1608 AD there was ice thick enough to bear weight along the shores of Lake Superior.
There is no consensus on when the Little Ice Age ended and the Modern Warming Period began, but I will stick to around 1850 AD, as proposed by Jan Esper, Ulf Buntgen and others above.
The global temperature in the modern warm period - The dotted red line shows results from GISS, the Goddard Institute for Space Studies at NASA. The dotted green line shows results from the UK Met Office Hadley Centre, compiled by the Climatic Research Unit at the University of East Anglia. The solid red line shows the NASA GISS results smoothed with a 10-year rolling average, and the solid green line the UK Met Office Hadley Centre results smoothed in the same way. The horizontal 0.0 line represents the average of temperatures in the period 1850-1899 (UK Met Office Hadley Centre) and 1880-1899 (NASA GISS). - The graph is from the European Environment Agency. The upward trend of the solid lines since 1998 must be due to some mathematical subtlety, as the temperature did not increase after 1998, which the dotted lines also show.
Since 1850, the Earth's average global temperature has increased by around 0.8 degrees compared to the average of the period between 1850 and 1899. Europe has warmed 1.2 degrees, which is more than the global average.
It started cold. In January and February 1864 AD, when the Danish soldiers waited behind the ancient defensive rampart Dannevirke for the Prussian and Austrian armies, both the wide marshes of western Schleswig, which should have protected their right flank, and the fjord Slien, which should have protected their left flank, were completely frozen, and precisely for that reason they made an organized retreat to the Dybboel redoubts to avoid encirclement. It also seems to have been quite cold in the trenches of Flanders during the First World War.
Top: Painting by Nils Simonsen, "Episode of the retreat from Dannevirke on 5-6 February 1864", painted in 1864.
Bottom: Two young women from Malmo in front of the frozen Oresund in 1924.
There was occasionally ice on the Thames. In 1924 the inner Danish waters were completely frozen, and one could walk from Scania to Copenhagen.
Some of the older generations of Danes can probably remember that their family in the countryside had a sleigh standing in a dusty corner of the barn. At the beginning of the twentieth century it was taken for granted that there would be snow in winter, and when you wanted to go somewhere, you simply harnessed the horses to the sleigh. Perhaps they were used for the last time during the severe winters of the 1940s.
Boise City in Oklahoma, USA, was hit on 15 April 1935 by a giant dust storm that blew away the upper layer of topsoil from the fields. This great disaster, which struck the American Midwest, is called "the Dust Bowl".
At the end of the 1800s the Western Prairie was the last land in North America to be cultivated. The settlers had a few good years, and then the problems began with locusts and drought. The "Dust Bowl" of the 1930s was caused by degraded soil and years without rain; entire fields took to the air in big black dust storms that could blow for days. "There was dust everywhere. It came into the houses, into the food and between the teeth," a settler recalled. Hundreds of thousands of people loaded up the few possessions they had and fled to California - as in Steinbeck's "The Grapes of Wrath".
Also in China, very large areas of newly reclaimed land in the provinces of Shaanxi and Inner Mongolia were reduced to desert in the first half of the 1900's.
From 1915 and throughout the interwar period the temperature increased, and by the end of the Second World War the average temperature had risen about one and a half degrees since 1850. This warming period started before automobiles, airplanes and other CO2-emitting vehicles had become widespread, while man-made CO2 emissions were still negligible.
Top: Ice winter in 1956 at Dalum near Odense - Denmark.
Bottom: Emissions of CO2 from fossil sources from the year 1800 to 2000 AD. - from Wikipedia.
During the industrial boom of 1950-1980 industry bloomed as never before, supplying cars, refrigerators, airplanes and all kinds of consumer goods. Most anthropogenic CO2 has been emitted precisely in this period. Supporters of the theory of anthropogenic global warming believe that CO2 emissions have driven the global temperature up. Nevertheless, in the very period when the industrial boom took place, the temperature dropped over four decades, and Northern Europe and North America again experienced winters with lots of snow.
Icebreaker in the Store Belt of Denmark in the winter of 1981-82 - Photo: Ove Hesstrup Hansen.
The ice winter of 1981-82 was the coldest ever measured in Denmark. The winter started as early as 7 December, and on 17 December minus 25.6 degrees Celsius was measured in Jutland - during the daytime.
Only during the economic crisis of the late eighties did the temperature begin to rise again.
One-third of all human CO2 emissions have taken place since 1998. However, the global temperatures have not increased in that same period.
The theory that anthropogenic greenhouse gases cause global warming assumes that the long-wave heat radiation from the heated Earth is blocked by CO2, which acts like the glass in a greenhouse and thereby prevents heat from radiating back into space. The long-wave outgoing heat radiation from the greenhouse interior is stopped by the glass, which converts the energy of the radiation into heat in the glass.
Top: A locomotive is dug free of snow on the Roskilde railway - Denmark - in winter 1942.
Bottom: Ice packs on the Oeresund at Charlottenlund near Copenhagen in the winter of 1987.
Therefore, scientists reason that if the theory of anthropogenic global warming through the greenhouse effect is true, we should be able to measure a warming in the atmosphere. In the same way that the long-wave outgoing heat radiation delivers its energy as heat to the glass in the greenhouse, the long-wave outgoing heat radiation from the Earth's surface and the clouds should deliver some of its energy as heat to the atmosphere, since that is where, it is said, it is stopped by CO2. Scientists have calculated that most of the heat should be found at an altitude of about 10 km. But whether scientists measure with weather balloons or satellites, no warming can be detected at this altitude; on the contrary. It is clear that the heating takes place at the surface of the Earth and not in the atmosphere - suggesting that the idea of man-made global warming due to CO2 emissions is not true.
It is true that both atmospheric CO2 and global temperature have increased in the modern warming period, but the warming does not match the theory: it has occurred in the wrong place at the wrong times.
The sun as it looked on 8 June 2013 - from spaceweather.com - For a very long time no sunspots had appeared, and experts had begun to fear a new Little Ice Age. But now, finally, some small spots showed up.
On the Sun one can see dark spots called sunspots. They are typically of the size of Earth, that is, between 4,000 and 50,000 kilometers in diameter. The spots are about 1,000 degrees Celsius cooler than the rest of the Sun's surface, which is approximately 5,750 degrees. Contrary to what one might believe, the Sun radiates the most energy when there are many sunspots, because warmer areas occur around the spots at the same time, and these more than compensate for the lower temperature of the spots.
Sunspots were first described by the Greek philosopher Theophrastus of Lesbos around 300 BC; he saw some strange black spots on the Sun's surface. There are also earlier reports from China telling us that sunspots have been seen with the naked eye; this is sometimes possible - with caution - when the Sun is low in the sky and blurred by haze. Galileo directed his telescope towards the Sun in 1613, observed and described sunspots systematically, and published his "Letters on Sunspots".
Some sunspots grow very large and last several months; others become only a few hundred square kilometers in size and disappear within a few days. Since the mid-1800s we have known that the number of sunspots varies with a period of 11 years, so that a sunspot maximum occurs every 11 years.
The daily number of sunspots since 1900, from the Solar Influences Data Analysis Centre (SIDC). Note the 11-year cycle. In 2013 we are near the maximum sunspot activity of cycle 24, which runs from around 2009 to around 2020. Maximum sunspot activity should occur about 2014-15, but the number of daily sunspots is nevertheless still quite low.
Systematic counts of sunspots have been routine since the telescope came into use in Galileo's time. During the Little Ice Age the astronomer Cassini in Paris reported in 1671 that he had found a sunspot, the first he had seen in many years. The Englishman Edward Maunder studied old records of sunspots and came to the conclusion that during the Little Ice Age there were virtually no sunspots. The period was named the Maunder Minimum.
When there are many sunspots, the solar magnetic field is strong and deflects much of the cosmic radiation directed toward the Earth. When there are no or only a few sunspots, the Sun's magnetic field is weak, allowing more cosmic radiation to hit the Earth.
Top: Carbon-14 and beryllium-10 are created when cosmic rays enter the atmosphere.
Bottom: Solar activity through 1,000 years as revealed by carbon-14 analysis. The Oort Minimum refers to a minor cold period within the Medieval warm period; note also that the Wolf and Sporer Minima occurred in the Little Ice Age. The graph ends around the all-time maximum in 1950; the activity has since decreased significantly.
When cosmic rays hit Earth's atmosphere, new isotopes are generated, especially carbon-14 and beryllium-10. When the cosmic radiation is strong, many of these isotopes are formed, and when the cosmic radiation is weak, fewer are formed. Both carbon-14 and beryllium-10 are unstable isotopes that decay over a very long time.
By analyzing historical records, the carbon-14 content of tree growth rings and the beryllium-10 content of ice cores from the ice caps, scientists have been able to reconstruct past levels of cosmic radiation and identify other periods when the Sun's activity and its magnetic field have been weak, such as the Oort Minimum, the Wolf Minimum, the Sporer Minimum and of course the Maunder Minimum.
The radiation from the sun has an intensity of 1,370 W/m2 on an imaginary surface perpendicular to the line between the Sun and Earth located above the atmosphere at the equator.
There is a correlation between the number of sunspots and the intensity of the solar radiation that reaches Earth: when there are many sunspots, the Sun's radiation is stronger. Solar irradiance on the surface described above oscillates with an amplitude of 1.2 W/m2 between the maximum and minimum number of sunspots. That is only about 0.09% of the total radiation - far too little to be noticeable on its own!
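A quick check of that percentage, using the figures quoted above: 1.2 W/m2 divided by 1,370 W/m2 is about 0.0009, or roughly 0.09% of the total irradiance.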
However, there are very powerful amplification mechanisms.
The Sun has a magnetic field that is several thousand times stronger than Earth's. Currently the Sun's field is about 2,000 gauss, compared with Earth's field of about 1 gauss. It stretches far into space, well beyond Pluto's orbit. Since 1990 the solar magnetic field has decreased from 2,700 gauss to the current roughly 2,000 gauss. Sunspots are regions on the Sun with intense magnetic activity; many sunspots are a sign that the solar magnetic field is strong, and few sunspots mean that the field is weaker.
Eigil Friis-Christensen and Henrik Svensmark found a very close correlation between the Sun's magnetic activity and Earth's temperature - From the documentary "The Cloud Mystery".
The Sun is a star in the Milky Way galaxy, which contains at least 100 billion other stars. Some stars explode as supernovae, thereby emitting particles - electrons, protons, neutrons or ionized atomic nuclei - which enter Earth's atmosphere, sometimes at nearly the speed of light. Earth and the solar system are thus constantly exposed to cosmic radiation.
But only some of the cosmic rays hit the Earth; a big part is deflected by the Sun's strong magnetic field. When there are many sunspots, and the Sun's magnetic field is strong, it will deflect much radiation and the cosmic radiation, which enters the atmosphere will be weak. But a lazy sun with few or no sunspots will have a weaker magnetic field and deflect a smaller part of the cosmic radiation. The radiation, which enters the atmosphere, will, therefore, be more intense.
The Sun has a very strong magnetic field, which extends all the way to the orbit of Pluto and beyond. It should be noted that the northern lights are created by particles emitted from the Sun that meet Earth's magnetic field; they are not created directly by the solar magnetic field.
The Danish scientists Eigil Friis-Christensen and Henrik Svensmark have demonstrated that clouds are created by cosmic radiation. This means that when the cosmic radiation, that enters the atmosphere, is strong, the Earth's cloud cover will be extensive, and when the cosmic radiation is weak, Earth's cloud cover will be less extensive.
We imagine generally that clouds are composed of water vapor. It is not the case since water vapor is a transparent gas. Clouds consist of aerosols, which are clumps of molecules of different kinds, mainly water molecules. Aerosols are formed around a particle or ion.
When a cosmic particle enters Earth's atmosphere at tremendous speed, it knocks electrons loose from the molecules it hits along its way, thereby creating a trail of ions that quickly cluster into aerosols; in an atmosphere containing water vapor, the aerosols will then seed clouds.
When cosmic radiation particles enter Earth's atmosphere with great energy, they create ions and thus aerosols, which accumulate into clouds. (Photo from NASA)
During the last 100 years, up to the beginning of the new millennium, the Sun's magnetic field doubled. Precisely for this reason, the cosmic radiation that hits Earth dropped by about 15%. As a result there are now fewer low clouds over the Earth. Low clouds have a cooling effect, and as there have been fewer of them, we probably have here the explanation of the Modern Warm Period. Today Earth's average cloud cover is about 60-70%, and small changes in cloud cover bring about changes in climate.
We have often felt on our own bodies that cloud cover has a marked cooling effect: we lie on the sand at the beach after a swim, bathed in sunshine; then a cloud passes in front of the sun, and immediately we feel the heat disappear.
During the Little Ice Age, around 1700 AD, there was a period with practically no sunspots at all. We can therefore assume that the solar magnetic field was weak and allowed a great deal of the cosmic rays to enter the Earth's atmosphere. The cosmic rays caused clouds to form to a fairly large extent. This extensive cloud cover reflected the sun's rays from its white upper side, preventing the sun from heating the Earth, and that is why the Little Ice Age was such a cold period.
Average annual number of sunspots since 1600, from "Climate, History and the Modern World" by H. H. Lamb.
Eigil Friis-Christensen and Henrik Svensmark's theory, that variations in the solar magnetic field are the cause of climate change, challenges the prevailing theory that man-made CO2 is causing global warming.
Christensen and Svensmark's theory is based on simple assumptions and simple experiments that can be confirmed by everyday experience, such as feeling it get colder when a cloud passes in front of the sun. The cloud chamber is a simple device, known since C.T.R. Wilson won the Nobel Prize for its invention in 1927.
The theory of anthropogenic CO2 as the cause of global warming is far more subtle and speculative, and it requires to a greater extent that common people blindly believe experts.
Top: Earth's temperature since 1880. It can be seen that during the industrial expansion of the postwar period, when most CO2 was emitted, the temperature dropped, down to the ice winters of the 1980s. Only with the economic crisis of the eighties did it begin to rise again - From the documentary "The Great Global Warming Swindle".
Bottom: Traces from trajectories of elementary particles in a Wilson cloud chamber.
The content of CO2 in the atmosphere is today 385 ppm (parts per million), which is 0.0385%, that is, close to four parts in ten thousand. It is not at all obvious that a marginal change in such a small proportion can change the climate significantly.
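As a check of the conversion used above: 385 ppm is 385/1,000,000 = 0.000385 of the atmosphere, or 0.0385%, i.e. just under four parts in ten thousand.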
The enormous popular support for the theory of anthropogenic CO2 emissions as the cause of climate change is by all accounts due to the theory being in tune with our Judeo-Christian culture's idea that we are all sinners. Furthermore, the accusation that industrial companies are the main culprits of emissions fits well with feminist attacks on white men and with the socialist concept of the evil capitalists.
Links, some Danish - some English:
Drivtoemmer og strandvolde afsloerer 10.000 aars variationer i havisen Jens Ramskov in Ingenioeren.
Historic variations in Sea Levels Part 1- from the Holocene to Romans (pdf).
Late Pleistocene and Holocene climate of SE Australia reconstructed from dust and river loads deposited offshore the River Murray Mouth Franz Gingele, Patrick De Deckker and Marc Norman (pdf).
HOCLAT - A web-based Holocene Climate Atlas Heinz Wanner and Stefan Ritz - University of Bern, Bern, Switzerland (pdf).
Mid- to Late Holocene climate change: an overview Heinz Wanner, Jurg Beer, Jonathan Butikofer and others (pdf).
Drivtoemmer fundet paa Nordgroenland - YouTube interview with Svend Funder, lecturer at the Centre for GeoGenetics.
Roman Warming (was it global?) JoNova Tackling tribal groupthink.
Klimaskifte - Fimbulvetr - den store vinter aar 536 e. Kr. Verasir - As always, Flemming Rickfors treats the subject very thoroughly.
Variability and extremes of northern Scandinavian summer temperatures over the past two millennia. by Jan Esper, Ulf Buntgen, Mauri Timonen and David C. Frank (pdf).
Not So Hot in East China. World Climate Report.
Om Groenland, dets natur og klimatiske forhold - et uddrag fra kongespejlet. Translated from Icelandic with a foreword by Chr. Dorph (pdf).
Nordboernes livsgrundlag i Sydvest Groenland. by Naja Mikkelsen and Antoon Kuijpers - Special article from GEUS' annual report for 2000.
Vikingerne dyrkede korn paa Groenland. by Sybille Hildebrandt - Videnskab.dk.
Evidence of a medieval warm period in Antarctica. SPPI & CO2SCIENCE ORIGINAL PAPER.
History of Medieval Greenland and associated places, like Iceland and Vinland. Marc Carlson has prepared a very useful list of all known intelligence from Norse Greenland inc. an inventory of sources and links.
What Water Temperatures Can Cod Handle? The Fish Site.
Global and European temperature (CSI 012/CLIM 001) - Assessment published May 2011 by European Environment Agency.
Svensmark: The Cloud Mystery - YouTube documentary by Lars Oxfeldt Mortensen - 52 minutes.
Solpletterne forsvinder om faa aar, spaar amerikanske forskere Jens Ramskov - Ingenioeren.
Solarmonitor.org Here you can follow the development of sun spots.
Holocene climatic and environmental changes in the arid and semi-arid areas of China: a review Z. D. Feng, C. B. An and H. B. Wang.
To The Horror Of Global Warming Alarmists, Global Cooling Is Here by Peter Ferrara in Forbes.
The Great Global Warming Swindle - YouTube BBC documentary.
I never tire of recommending Flemming Rickfors' section on Vinland and Greenland (Danish): Fra Groenland til Nyaland - Asernes Aet.
Earth's Climate History (Kindle Edition) by Anton Uriarte.
Climate, History and the Modern World (Kindle Edition) by H. H. Lamb.
Syun-Ichi Akasofu's model of interglacials as a result of heat pulses.
Everything is relative, Syun-Ichi Akasofu of the International Arctic Research Center, University of Alaska Fairbanks, seems to think. Most believe that the current average global temperature of 14-15 degrees is normal and that ice ages are abnormal. Syun-Ichi believes that the ordinary average temperature of our planet is a glacial temperature of about 5 degrees, and that the 15 degrees occurs only in short periodic warm periods, which happen roughly every 100,000 years. The heat always comes quickly and then disappears, returning the climate to the normal 5 degrees. Syun-Ichi does not say so, but with such a view one can almost only conclude that the periodic heat pulses come from the Sun.
The Big Ice Age or The Big Steamy Age? Syun-Ichi Akasofu International Arctic Research Center, University of Alaska Fairbanks.
Muller and MacDonald suggest that the recurring glaciations and interglacials are due to Earth at regular intervals moving through regions of space with cosmic dust, which is assumed to reduce the solar radiation that Earth receives.
One can visualize the alternative astronomical cycle that Muller and MacDonald have found, which fits the climatic records. Imagine a flat disc with the Sun in the middle and the nine planets orbiting close to the disc; in fact, all the planets orbit close to such an imaginary disc. It is believed that the space near this imaginary disc contains more cosmic dust than the rest of space. At regular intervals Earth's orbit tilts slowly out of the plane of the disc, the planet warms because of the cleaner space there, and then the orbit returns again. When Muller first calculated the cycle of Earth's orbital deviation from the Solar System's disc plane in 1993, he found that it repeats every 100,000 years.
Astronomical Theory Offers New Explanation For Ice Age Berkeley Lab - Research News by Jeffery Kahn. June 11, 1997 - explanation of Muller and MacDonald's theory.
Object-oriented programming (OOP) is a programming paradigm based on the concept of "objects", which can contain data, in the form of fields (often known as attributes), and code, in the form of procedures (often known as methods). A feature of objects is an object's procedures that can access and often modify the data fields of the object with which they are associated (objects have a notion of "this" or "self"). In OOP, computer programs are designed by making them out of objects that interact with one another. OOP languages are diverse, but the most popular ones are class-based, meaning that objects are instances of classes, which also determine their types.
Object-oriented programming uses objects, but not all of the associated techniques and structures are supported directly in languages that claim to support OOP. The features listed below are common among languages considered to be strongly class- and object-oriented (or multi-paradigm with OOP support), with notable exceptions mentioned.
Modular programming support provides the ability to group procedures into files and modules for organizational purposes. Modules are namespaced so identifiers in one module will not be accidentally confused with a procedure or variable sharing the same name in another file or module.
Languages that support object-oriented programming typically use inheritance for code reuse and extensibility in the form of either classes or prototypes. Those that use classes support two main concepts: classes, which define the data format and available procedures for a given type of object (and may themselves contain data and procedures, known as class methods), and objects, which are instances of classes.
Objects sometimes correspond to things found in the real world. For example, a graphics program may have objects such as "circle", "square", "menu". An online shopping system might have objects such as "shopping cart", "customer", and "product". Sometimes objects represent more abstract entities, like an object that represents an open file, or an object that provides the service of translating measurements from U.S. customary to metric.
Each object is said to be an instance of a particular class (for example, an object with its name field set to "Mary" might be an instance of class Employee). Procedures in object-oriented programming are known as methods; variables are also known as fields, members, attributes, or properties. This leads to distinctions such as class variables versus instance variables (data belonging to the class as a whole versus data belonging to individual objects) and class methods versus instance methods.
Objects are accessed somewhat like variables with complex internal structure, and in many languages are effectively pointers, serving as actual references to a single instance of said object in memory within a heap or stack. They provide a layer of abstraction which can be used to separate internal from external code. External code can use an object by calling a specific instance method with a certain set of input parameters, read an instance variable, or write to an instance variable. Objects are created by calling a special type of method in the class known as a constructor. A program may create many instances of the same class as it runs, which operate independently. This is an easy way for the same procedures to be used on different sets of data.
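A minimal Python sketch of these ideas; the Employee class, its fields and its method are invented for illustration, matching the example names used above:

    class Employee:
        def __init__(self, name, position):
            # The constructor initializes the new instance's fields.
            self.name = name
            self.position = position

        def describe(self):
            # An instance method reads the object's own fields via self.
            return self.name + " works as " + self.position

    # Many independent instances of the same class can be created as the program runs.
    mary = Employee("Mary", "engineer")
    john = Employee("John", "accountant")
    print(mary.describe())   # Mary works as engineer
    print(john.describe())   # John works as accountant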
Object-oriented programming that uses classes is sometimes called class-based programming, while prototype-based programming does not typically use classes. As a result, a significantly different yet analogous terminology is used to define the concepts of object and instance.
In class-based languages the classes are defined beforehand and the objects are instantiated based on the classes. If two objects apple and orange are instantiated from the class Fruit, they are inherently fruits and it is guaranteed that you may handle them in the same way; e.g. a programmer can expect the existence of the same attributes such as color or sugar_content or is_ripe.
In prototype-based languages the objects are the primary entities. No classes even exist. The prototype of an object is just another object to which the object is linked. Every object has one prototype link (and only one). New objects can be created based on already existing objects chosen as their prototype. You may call two different objects apple and orange a fruit, if the object fruit exists, and both apple and orange have fruit as their prototype. The idea of the fruit class doesn't exist explicitly, but as the equivalence class of the objects sharing the same prototype. The attributes and methods of the prototype are delegated to all the objects of the equivalence class defined by this prototype. The attributes and methods owned individually by the object may not be shared by other objects of the same equivalence class; e.g. the attribute sugar_content may be unexpectedly not present in apple. Only single inheritance can be implemented through the prototype.
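Python itself is class-based, but the delegation idea behind prototypes can be sketched in a few lines. This is only an illustrative approximation, not how any particular prototype-based language implements it; the Proto class and the fruit/apple attributes are invented here:

    class Proto:
        """Toy object whose failed attribute lookups are delegated to a prototype."""
        def __init__(self, prototype=None, **attrs):
            self.__dict__.update(attrs)
            self._prototype = prototype

        def __getattr__(self, name):
            # Called only when normal lookup fails: follow the prototype link.
            if self._prototype is not None:
                return getattr(self._prototype, name)
            raise AttributeError(name)

    fruit = Proto(sugar_content="unknown", is_ripe=False)
    apple = Proto(prototype=fruit, color="red")    # apple's prototype is fruit
    print(apple.color)      # "red"   - owned by apple itself
    print(apple.is_ripe)    # False   - delegated to the fruit prototype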
It is the responsibility of the object, not any external code, to select the procedural code to execute in response to a method call, typically by looking up the method at run time in a table associated with the object. This feature is known as dynamic dispatch, and distinguishes an object from an abstract data type (or module), which has a fixed (static) implementation of the operations for all instances. If the call variability relies on more than the single type of the object on which it is called (i.e. at least one other parameter object is involved in the method choice), one speaks of multiple dispatch.
A method call is also known as message passing. It is conceptualized as a message (the name of the method and its input parameters) being passed to the object for dispatch.
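One way to visualize the "message" view, using a hypothetical Account class invented for this sketch: the message is just a method name plus parameters, and the receiving object looks the method up by name at run time.

    class Account:
        def __init__(self, balance):
            self.balance = balance

        def deposit(self, amount):
            self.balance += amount
            return self.balance

    acct = Account(100)

    # A "message": the name of a method plus its input parameters.
    message = ("deposit", (25,))

    # Dispatch: the receiving object looks the method up by name at run time.
    method_name, args = message
    print(getattr(acct, method_name)(*args))   # 125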
Encapsulation is an object-oriented programming concept that binds together the data and functions that manipulate the data, and that keeps both safe from outside interference and misuse. Data encapsulation led to the important OOP concept of data hiding.
If a class does not allow calling code to access internal object data and permits access through methods only, this is a strong form of abstraction or information hiding known as encapsulation. Some languages (Java, for example) let classes enforce access restrictions explicitly, for example denoting internal data with the private keyword and designating methods intended for use by code outside the class with the public keyword. Methods may also be designated public, private, or intermediate levels such as protected (which allows access from the same class and its subclasses, but not objects of a different class). In other languages (like Python) this is enforced only by convention (for example, private methods may have names that start with an underscore). Encapsulation prevents external code from being concerned with the internal workings of an object. This facilitates code refactoring, for example allowing the author of the class to change how objects of that class represent their data internally without changing any external code (as long as "public" method calls work the same way). It also encourages programmers to put all the code that is concerned with a certain set of data in the same class, which organizes it for easy comprehension by other programmers. Encapsulation is a technique that encourages decoupling.
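A short Python sketch of convention-based encapsulation; the BankAccount class and its fields are invented for illustration. Internal state is marked with a leading underscore and exposed only through public methods.

    class BankAccount:
        def __init__(self):
            self._balance = 0            # internal by convention (leading underscore)

        def deposit(self, amount):
            if amount <= 0:
                raise ValueError("deposit must be positive")
            self._balance += amount

        def balance(self):
            # External code reads the state only through this public method, so the
            # internal representation can change without breaking any callers.
            return self._balance

    account = BankAccount()
    account.deposit(50)
    print(account.balance())   # 50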
Objects can contain other objects in their instance variables; this is known as object composition. For example, an object in the Employee class might contain (either directly or through a pointer) an object in the Address class, in addition to its own instance variables like "first_name" and "position". Object composition is used to represent "has-a" relationships: every employee has an address, so every Employee object has access to a place to store an Address object (either directly embedded within itself, or at a separate location addressed via a pointer).
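The has-a relationship described above, sketched in Python with the same hypothetical Employee and Address names:

    class Address:
        def __init__(self, street, city):
            self.street = street
            self.city = city

    class Employee:
        def __init__(self, first_name, position, address):
            self.first_name = first_name
            self.position = position
            self.address = address       # composition: every Employee has an Address

    home = Address("1 Main Street", "Springfield")
    employee = Employee("Mary", "engineer", home)
    print(employee.address.city)         # Springfield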
Languages that support classes almost always support inheritance. This allows classes to be arranged in a hierarchy that represents "is-a-type-of" relationships. For example, class Employee might inherit from class Person. All the data and methods available to the parent class also appear in the child class with the same names. For example, class Person might define variables "first_name" and "last_name" with method "make_full_name". These will also be available in class Employee, which might add the variables "position" and "salary". This technique allows easy re-use of the same procedures and data definitions, in addition to potentially mirroring real-world relationships in an intuitive way. Rather than utilizing database tables and programming subroutines, the developer utilizes objects the user may be more familiar with: objects from their application domain.
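The Person/Employee example from the paragraph above, as a small Python sketch (the concrete fields and values are invented):

    class Person:
        def __init__(self, first_name, last_name):
            self.first_name = first_name
            self.last_name = last_name

        def make_full_name(self):
            return self.first_name + " " + self.last_name

    class Employee(Person):              # Employee "is a type of" Person
        def __init__(self, first_name, last_name, position, salary):
            super().__init__(first_name, last_name)
            self.position = position
            self.salary = salary

    worker = Employee("Mary", "Smith", "engineer", 50000)
    print(worker.make_full_name())       # inherited from Person: "Mary Smith"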
Subclasses can override the methods defined by superclasses. Multiple inheritance is allowed in some languages, though this can make resolving overrides complicated. Some languages have special support for mixins, though in any language with multiple inheritance, a mixin is simply a class that does not represent an is-a-type-of relationship. Mixins are typically used to add the same methods to multiple classes. For example, class UnicodeConversionMixin might provide a method unicode_to_ascii when included in class FileReader and class WebPageScraper, which don't share a common parent.
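A minimal sketch of the mixin idea, reusing the hypothetical UnicodeConversionMixin name from the paragraph above; the conversion itself is deliberately crude and only for illustration:

    class UnicodeConversionMixin:
        def unicode_to_ascii(self, text):
            # Crude conversion for illustration: drop every non-ASCII character.
            return text.encode("ascii", errors="ignore").decode("ascii")

    class FileReader(UnicodeConversionMixin):
        pass        # would also contain file-reading methods

    class WebPageScraper(UnicodeConversionMixin):
        pass        # would also contain scraping methods

    print(FileReader().unicode_to_ascii("naïve café"))   # "nave caf"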
Abstract classes cannot be instantiated into objects; they exist only for the purpose of inheritance into other "concrete" classes which can be instantiated. In Java, the final keyword can be used to prevent a class from being subclassed.
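In Python an abstract class can be sketched with the standard abc module; the Shape/Square names here are invented for illustration, and Python has no enforced equivalent of Java's final keyword, so that part is not shown.

    from abc import ABC, abstractmethod

    class Shape(ABC):                    # abstract: cannot be instantiated directly
        @abstractmethod
        def area(self):
            ...

    class Square(Shape):                 # a "concrete" subclass
        def __init__(self, side):
            self.side = side

        def area(self):
            return self.side * self.side

    # Shape() would raise TypeError; the concrete subclass works as usual.
    print(Square(3).area())              # 9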
The doctrine of composition over inheritance advocates implementing has-a relationships using composition instead of inheritance. For example, instead of inheriting from class Person, class Employee could give each Employee object an internal Person object, which it then has the opportunity to hide from external code even if class Person has many public attributes or methods. Some languages, like Go do not support inheritance at all.
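A minimal sketch of the wrapping approach described above, with the same hypothetical Person/Employee names; only the behaviour Employee chooses to expose is forwarded to the internal Person.

    class Person:
        def __init__(self, first_name, last_name):
            self.first_name = first_name
            self.last_name = last_name

    class Employee:
        def __init__(self, first_name, last_name, position):
            # Composition instead of inheritance: the Person object is held
            # internally and hidden from external code.
            self._person = Person(first_name, last_name)
            self.position = position

        def full_name(self):
            return self._person.first_name + " " + self._person.last_name

    print(Employee("Mary", "Smith", "engineer").full_name())   # Mary Smith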
The "open/closed principle" advocates that classes and functions "should be open for extension, but closed for modification".
Delegation is another language feature that can be used as an alternative to inheritance.
Subtyping - a form of polymorphism - is when calling code can be agnostic as to which class in the supported hierarchy it is operating on - the parent class or one of its descendants. Meanwhile, the same operation name among objects in an inheritance hierarchy may behave differently.
For example, objects of type Circle and Square are derived from a common class called Shape. The Draw function for each type of Shape implements what is necessary to draw itself, while the calling code can remain indifferent to which particular type of Shape is being drawn.
This is another type of abstraction which simplifies code external to the class hierarchy and enables strong separation of concerns.
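The Circle/Square example above, sketched in Python: the calling code iterates over Shapes without caring which concrete type it holds.

    class Shape:
        def draw(self):
            raise NotImplementedError

    class Circle(Shape):
        def draw(self):
            return "drawing a circle"

    class Square(Shape):
        def draw(self):
            return "drawing a square"

    for shape in [Circle(), Square()]:
        # The calling code does not care which concrete Shape it holds;
        # each object supplies its own draw implementation (dynamic dispatch).
        print(shape.draw())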
In languages that support open recursion, object methods can call other methods on the same object (including themselves), typically using a special variable or keyword called self. This variable is late-bound; it allows a method defined in one class to invoke another method that is defined later, in some subclass thereof.
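A small Python illustration of open recursion, with class names invented for the sketch: a method defined in the base class calls self.header(), and the call resolves to the subclass's override at run time.

    class Report:
        def render(self):
            # self.header() is late-bound: it resolves to the subclass override
            # even though render() is defined here in the base class.
            return self.header() + "\nbody"

        def header(self):
            return "generic header"

    class MonthlyReport(Report):
        def header(self):                # defined "later", in a subclass
            return "monthly header"

    print(MonthlyReport().render())      # "monthly header" followed by "body"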
Terminology invoking "objects" and "oriented" in the modern sense of object-oriented programming made its first appearance at MIT in the late 1950s and early 1960s. In the environment of the artificial intelligence group, as early as 1960, "object" could refer to identified items (LISP atoms) with properties (attributes); Alan Kay was later to cite a detailed understanding of LISP internals as a strong influence on his thinking in 1966.
Another early MIT example was Sketchpad created by Ivan Sutherland in 1960-61; in the glossary of the 1963 technical report based on his dissertation about Sketchpad, Sutherland defined notions of "object" and "instance" (with the class concept covered by "master" or "definition"), albeit specialized to graphical interaction. Also, an MIT ALGOL version, AED-0, established a direct link between data structures ("plexes", in that dialect) and procedures, prefiguring what were later termed "messages", "methods", and "member functions".
In the 1960s, object-oriented programming was put into practice with the Simula language, which introduced important concepts that are today an essential part of object-oriented programming, such as class and object, inheritance, and dynamic binding. Simula was also designed to take account of programming and data security. For programming security purposes a detection process was implemented so that through reference counts a last resort garbage collector deleted unused objects in the random-access memory (RAM). But although the idea of data objects had already been established by 1965, data encapsulation through levels of scope for variables, such as private (-) and public (+), were not implemented in Simula because it would have required the accessing procedures to be also hidden.
In 1962, Kristen Nygaard initiated a project for a simulation language at the Norwegian Computing Center, based on his previous use of the Monte Carlo simulation and his work to conceptualise real-world systems. Ole-Johan Dahl formally joined the project and the Simula programming language was designed to run on the Universal Automatic Computer (UNIVAC) 1107. In the early stages Simula was supposed to be a procedure package for the programming language ALGOL 60. Dissatisfied with the restrictions imposed by ALGOL the researchers decided to develop Simula into a fully-fledged programming language, which used the UNIVAC ALGOL 60 compiler. Simula launched in 1964, and was promoted by Dahl and Nygaard throughout 1965 and 1966, leading to increasing use of the programming language in Sweden, Germany and the Soviet Union. In 1968, the language became widely available through the Burroughs B5500 computers, and was later also implemented on the URAL-16 computer. In 1966, Dahl and Nygaard wrote a Simula compiler. They became preoccupied with putting into practice Tony Hoare's record class concept, which had been implemented in the free-form, English-like general-purpose simulation language SIMSCRIPT. They settled for a generalised process concept with record class properties, and a second layer of prefixes. Through prefixing a process could reference its predecessor and have additional properties. Simula thus introduced the class and subclass hierarchy, and the possibility of generating objects from these classes. The Simula 1 compiler and a new version of the programming language, Simula 67, was introduced to the wider world through the research paper "Class and Subclass Declarations" at a 1967 conference.
A Simula 67 compiler was launched for the System/360 and System/370 IBM mainframe computers in 1972. In the same year a Simula 67 compiler was launched free of charge for the French CII 10070 and CII Iris 80 mainframe computers. By 1974, the Association of Simula Users had members in 23 different countries. Early 1975 a Simula 67 compiler was released free of charge for the DecSystem-10 mainframe family. By August the same year the DecSystem Simula 67 compiler had been installed at 28 sites, 22 of them in North America. The object-oriented Simula programming language was used mainly by researchers involved with physical modelling, such as models to study and improve the movement of ships and their content through cargo ports.
In the 1970s, the first version of the Smalltalk programming language was developed at Xerox PARC by Alan Kay, Dan Ingalls and Adele Goldberg. Smalltalk-72 included a programming environment and was dynamically typed, and at first was interpreted, not compiled. Smalltalk became noted for its application of object orientation at the language level and its graphical development environment. Smalltalk went through various versions and interest in the language grew. While Smalltalk was influenced by the ideas introduced in Simula 67, it was designed to be a fully dynamic system in which classes could be created and modified dynamically.
In the 1970s, Smalltalk influenced the Lisp community to incorporate object-based techniques that were introduced to developers via the Lisp machine. Experimentation with various extensions to Lisp (such as LOOPS and Flavors introducing multiple inheritance and mixins) eventually led to the Common Lisp Object System, which integrates functional programming and object-oriented programming and allows extension via a Meta-object protocol. In the 1980s, there were a few attempts to design processor architectures that included hardware support for objects in memory but these were not successful. Examples include the Intel iAPX 432 and the Linn Smart Rekursiv.
In 1981, Goldberg edited the August 1981 issue of Byte Magazine, introducing Smalltalk and object-oriented programming to a wider audience. In 1986, the Association for Computing Machinery organised the first Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), which was unexpectedly attended by 1,000 people. In the mid-1980s Objective-C was developed by Brad Cox, who had used Smalltalk at ITT Inc., and Bjarne Stroustrup, who had used Simula for his PhD thesis, eventually went to create the object-oriented C++. In 1985, Bertrand Meyer also produced the first design of the Eiffel language. Focused on software quality, Eiffel is a purely object-oriented programming language and a notation supporting the entire software lifecycle. Meyer described the Eiffel software development method, based on a small number of key ideas from software engineering and computer science, in Object-Oriented Software Construction. Essential to the quality focus of Eiffel is Meyer's reliability mechanism, Design by Contract, which is an integral part of both the method and language.
In the early and mid-1990s object-oriented programming developed as the dominant programming paradigm when programming languages supporting the techniques became widely available. These included Visual FoxPro 3.0, C++, and Delphi. Its dominance was further enhanced by the rising popularity of graphical user interfaces, which rely heavily upon object-oriented programming techniques. An example of a closely related dynamic GUI library and OOP language can be found in the Cocoa frameworks on Mac OS X, written in Objective-C, an object-oriented, dynamic messaging extension to C based on Smalltalk. OOP toolkits also enhanced the popularity of event-driven programming (although this concept is not limited to OOP).
At ETH Zürich, Niklaus Wirth and his colleagues had also been investigating such topics as data abstraction and modular programming (although this had been in common use in the 1960s or earlier). Modula-2 (1978) included both, and their succeeding design, Oberon, included a distinctive approach to object orientation, classes, and such.
Object-oriented features have been added to many previously existing languages, including Ada, BASIC, Fortran, Pascal, and COBOL. Adding these features to languages that were not initially designed for them often led to problems with compatibility and maintainability of code.
More recently, a number of languages have emerged that are primarily object-oriented, but that are also compatible with procedural methodology. Two such languages are Python and Ruby. Probably the most commercially important recent object-oriented languages are Java, developed by Sun Microsystems, as well as C# and Visual Basic.NET (VB.NET), both designed for Microsoft's .NET platform. Each of these two frameworks shows, in its own way, the benefit of using OOP by creating an abstraction from implementation. VB.NET and C# support cross-language inheritance, allowing classes defined in one language to subclass classes defined in the other language.
Simula (1967) is generally accepted as being the first language with the primary features of an object-oriented language. It was created for making simulation programs, in which what came to be called objects were the most important information representation. Smalltalk (1972 to 1980) is another early example, and the one with which much of the theory of OOP was developed. Concerning the degree of object orientation, the following distinctions can be made:
In recent years, object-oriented programming has become especially popular in dynamic programming languages. Python, PowerShell, Ruby and Groovy are dynamic languages built on OOP principles, while Perl and PHP have been adding object-oriented features since Perl 5 and PHP 4, and ColdFusion since version 6.
The messages that flow between computers to request services in a client-server environment can be designed as the linearizations of objects defined by class objects known to both the client and the server. For example, a simple linearized object would consist of a length field, a code point identifying the class, and a data value. A more complex example would be a command consisting of the length and code point of the command and values consisting of linearized objects representing the command's parameters. Each such command must be directed by the server to an object whose class (or superclass) recognizes the command and is able to provide the requested service. Clients and servers are best modeled as complex object-oriented structures. Distributed Data Management Architecture (DDM) took this approach and used class objects to define objects at four levels of a formal hierarchy.
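A rough Python illustration of the linearized-object idea described above; the byte layout, the code point value and the helper name are invented for this sketch and are not taken from the DDM specification.

    import struct

    # Hypothetical layout: 2-byte total length, 2-byte class code point, payload bytes.
    def linearize(code_point, payload):
        length = 4 + len(payload)                 # header (4 bytes) plus the data value
        return struct.pack(">HH", length, code_point) + payload

    message = linearize(0x1041, b"\x00\x19")      # code point and value invented here
    print(message.hex())                          # 000610410019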
The initial version of DDM defined distributed file services. It was later extended to be the foundation of Distributed Relational Database Architecture (DRDA).
Challenges of object-oriented design are addressed by several approaches. Most common is the use of design patterns codified by Gamma et al. More broadly, the term "design patterns" can be used to refer to any general, repeatable solution pattern to a commonly occurring problem in software design. Some of these commonly occurring problems have implications and solutions particular to object-oriented development.
It is intuitive to assume that inheritance creates a semantic "is a" relationship, and thus to infer that objects instantiated from subclasses can always be safely used instead of those instantiated from the superclass. This intuition is unfortunately false in most OOP languages, in particular in all those that allow mutable objects. Subtype polymorphism as enforced by the type checker in OOP languages (with mutable objects) cannot guarantee behavioral subtyping in any context. Behavioral subtyping is undecidable in general, so it cannot be implemented by a program (compiler). Class or object hierarchies must be carefully designed, considering possible incorrect uses that cannot be detected syntactically. This issue is known as the Liskov substitution principle.
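A classic illustration of this pitfall is the mutable Rectangle/Square pair; this sketch is not taken from the text above, and the class and function names are invented. Square is a Rectangle in the mathematical sense, yet as mutable objects a Square cannot be substituted safely everywhere a Rectangle is expected.

    class Rectangle:
        def __init__(self, width, height):
            self.width, self.height = width, height

        def set_width(self, new_width):
            self.width = new_width

        def area(self):
            return self.width * self.height

    class Square(Rectangle):
        def set_width(self, new_width):
            # A square must keep its sides equal, so this override changes
            # behaviour that callers of Rectangle rely on.
            self.width = self.height = new_width

    def stretch(rect):
        # Written against Rectangle: the caller expects the height to stay unchanged.
        rect.set_width(10)

    r, s = Rectangle(2, 3), Square(2, 2)
    stretch(r)
    stretch(s)
    print(r.area())   # 30, as the caller expected
    print(s.area())   # 100 - the "substitutable" subclass broke the caller's assumption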
Design Patterns: Elements of Reusable Object-Oriented Software is an influential book published in 1994 by Erich Gamma, Richard Helm, Ralph Johnson, and John Vlissides, often referred to humorously as the "Gang of Four". Along with exploring the capabilities and pitfalls of object-oriented programming, it describes 23 common programming problems and patterns for solving them. As of April 2007, the book was in its 36th printing.
The book groups the 23 patterns it describes into creational, structural and behavioral patterns.
Both object-oriented programming and relational database management systems (RDBMSs) are extremely common in software today. Since relational databases don't store objects directly (though some RDBMSs have object-oriented features to approximate this), there is a general need to bridge the two worlds. The problem of bridging object-oriented programming accesses and data patterns with relational databases is known as object-relational impedance mismatch. There are a number of approaches to cope with this problem, but no general solution without downsides. One of the most common approaches is object-relational mapping, as found in IDE languages such as Visual FoxPro and libraries such as Java Data Objects and Ruby on Rails' ActiveRecord.
There are also object databases that can be used to replace RDBMSs, but these have not been as technically and commercially successful as RDBMSs.
OOP can be used to associate real-world objects and processes with digital counterparts. However, not everyone agrees that OOP facilitates direct real-world mapping (see Criticism section) or that real-world mapping is even a worthy goal; Bertrand Meyer argues in Object-Oriented Software Construction that a program is not a model of the world but a model of some part of the world; "Reality is a cousin twice removed". At the same time, some principal limitations of OOP have been noted. For example, the circle-ellipse problem is difficult to handle using OOP's concept of inheritance.
However, Niklaus Wirth (who popularized the adage now known as Wirth's law: "Software is getting slower more rapidly than hardware becomes faster") said of OOP in his paper, "Good Ideas through the Looking Glass", "This paradigm closely reflects the structure of systems 'in the real world', and it is therefore well suited to model complex systems with complex behaviours" (contrast KISS principle).
Steve Yegge and others noted that natural languages lack the OOP approach of strictly prioritizing things (objects/nouns) before actions (methods/verbs). This problem may cause OOP to suffer more convoluted solutions than procedural programming.
OOP was developed to increase the reusability and maintainability of source code. Transparent representation of the control flow had no priority and was meant to be handled by a compiler. With the increasing relevance of parallel hardware and multithreaded coding, developing transparent control flow becomes more important, something hard to achieve with OOP.
Responsibility-driven design defines classes in terms of a contract, that is, a class should be defined around a responsibility and the information that it shares. This is contrasted by Wirfs-Brock and Wilkerson with data-driven design, where classes are defined around the data-structures that must be held. The authors hold that responsibility-driven design is preferable.
SOLID is a mnemonic invented by Michael Feathers that stands for and advocates five programming practices: single responsibility, open-closed, Liskov substitution, interface segregation, and dependency inversion.
The OOP paradigm has been criticised for a number of reasons, including not meeting its stated goals of reusability and modularity, and for overemphasizing one aspect of software design and modeling (data/objects) at the expense of other important aspects (computation/algorithms).
Luca Cardelli has claimed that OOP code is "intrinsically less efficient" than procedural code, that OOP can take longer to compile, and that OOP languages have "extremely poor modularity properties with respect to class extension and modification", and tend to be extremely complex. The latter point is reiterated by Joe Armstrong, the principal inventor of Erlang, who is quoted as saying:
The problem with object-oriented languages is they've got all this implicit environment that they carry around with them. You wanted a banana but what you got was a gorilla holding the banana and the entire jungle.
A study by Potok et al. has shown no significant difference in productivity between OOP and procedural approaches.
Christopher J. Date stated that critical comparison of OOP to other technologies, relational in particular, is difficult because of lack of an agreed-upon and rigorous definition of OOP; however, Date and Darwen have proposed a theoretical foundation on OOP that uses OOP as a kind of customizable type system to support RDBMS.
In an article, Lawrence Krubner claimed that, compared to other languages (LISP dialects, functional languages, etc.), OOP languages have no unique strengths and inflict a heavy burden of unneeded complexity.
Alexander Stepanov, the primary designer of the C++ Standard Template Library, put it this way:

I find OOP technically unsound. It attempts to decompose the world in terms of interfaces that vary on a single type. To deal with the real problems you need multisorted algebras -- families of interfaces that span multiple types. I find OOP philosophically unsound. It claims that everything is an object. Even if it is true it is not very interesting -- saying that everything is an object is saying nothing at all.
Paul Graham has suggested that OOP's popularity within large companies is due to "large (and frequently changing) groups of mediocre programmers". According to Graham, the discipline imposed by OOP prevents any one programmer from "doing too much damage".
Steve Yegge put it this way:

Object Oriented Programming puts the Nouns first and foremost. Why would you go to such lengths to put one part of speech on a pedestal? Why should one kind of concept take precedence over another? It's not as if OOP has suddenly made verbs less important in the way we actually think. It's a strangely skewed perspective.
Rich Hickey, creator of Clojure, described object systems as overly simplistic models of the real world. He emphasized the inability of OOP to model time properly, which is getting increasingly problematic as software systems become more concurrent.
Eric S. Raymond, a Unix programmer and open-source software advocate, has been critical of claims that present object-oriented programming as the "One True Solution", and has written that object-oriented programming languages tend to encourage thickly layered programs that destroy transparency. Raymond compares this unfavourably to the approach taken with Unix and the C programming language.
Rob Pike, a programmer involved in the creation of UTF-8 and Go, has called object-oriented programming "the Roman numerals of computing" and has said that OOP languages frequently shift the focus from data structures and algorithms to types. Furthermore, he cites an instance of a Java professor whose "idiomatic" solution to a problem was to create six new classes, rather than to simply use a lookup table.
Objects are the run-time entities in an object-oriented system. They may represent a person, a place, a bank account, a table of data, or any item that the program has to handle.
There have been several attempts at formalizing the concepts used in object-oriented programming. The following concepts and constructs have been used as interpretations of OOP concepts:
Attempts to find a consensus definition or theory behind objects have not proven very successful (however, see Abadi & Cardelli, A Theory of Objects for formal definitions of many OOP concepts and constructs), and often diverge widely. For example, some definitions focus on mental activities, and some on program structuring. One of the simpler definitions is that OOP is the act of using "map" data structures or arrays that can contain functions and pointers to other maps, all with some syntactic and scoping sugar on top. Inheritance can be performed by cloning the maps (sometimes called "prototyping").
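As a minimal sketch of this "maps with functions" reading of OOP (the `send` dispatcher and the field names are invented for illustration):

```python
# An "object" as a plain map: data fields plus function-valued slots.
point = {
    "x": 1,
    "y": 2,
    "describe": lambda self: f"point at ({self['x']}, {self['y']})",
}

def send(obj, message, *args):
    # Dispatch: look the "method" up in the map and pass the map itself.
    return obj[message](obj, *args)

# "Inheritance" by cloning the map (prototyping), then overriding slots.
point3d = dict(point)
point3d["z"] = 3
point3d["describe"] = lambda self: (
    f"point at ({self['x']}, {self['y']}, {self['z']})"
)

print(send(point, "describe"))    # point at (1, 2)
print(send(point3d, "describe"))  # point at (1, 2, 3)
```

The clone-and-override step is exactly the prototyping the paragraph mentions: the new map starts as a copy of the old one, and only the changed slots differ.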
Perhaps the greatest strength of an object-oriented approach to development is that it offers a mechanism that captures a model of the real world.
In the local M.I.T. patois, association lists [of atomic symbols] are also referred to as "property lists", and atomic symbols are sometimes called "objects".
Object -- a synonym for atomic symbol
|
German nationalism is the nationalist idea that Germans are a nation, promotes the unity of Germans into a nation state, and emphasizes and takes pride in the national identity of Germans. The earliest origins of German nationalism began with the birth of romantic nationalism during the Napoleonic Wars when Pan-Germanism started to rise. Advocacy of a German nation state began to become an important political force in response to the invasion of German territories by France under Napoleon.
In the 19th century Germans debated the German Question over whether the German nation state should comprise a "Lesser Germany" that excluded Austria or a "Greater Germany" that included Austria. The faction led by Prussian Chancellor Otto von Bismarck succeeded in forging a Lesser Germany.
Aggressive German nationalism is viewed as having been a key factor in causing both World Wars. In World War II, the Nazis sought to create a Greater Germanic Reich, emphasizing ethnic German identity and German greatness to the exclusion of all others by exterminating Jews, Gypsies, and other peoples in the Holocaust.
After the defeat of the Nazis, Germany was divided into East and West Germany in the opening acts of the Cold War, and each state retained a sense of German identity and held reunification as a goal, albeit in different contexts. The creation of the European Union was in part an effort to harness German identity to a European identity. West Germany underwent its economic miracle following the war, which led to the creation of a guest worker program; many of these workers settled in Germany, which has led to tensions around questions of national and cultural identity, especially with regard to the Turkish community.
German reunification was achieved in 1990 following Die Wende; an event that caused some alarm both inside and outside Germany. Germany has emerged as a power inside Europe and in the world; its role in the European debt crisis and in the European migrant crisis have led to criticism of German authoritarian abuse of its power, especially with regard to the Greek debt crisis, and raised questions within and without Germany as to Germany's role in the world.
German nationalism has been generally viewed in the country as taboo, and people within Germany have struggled to find ways of acknowledging the country's past while still taking pride in its past and present accomplishments; the German question has never been fully resolved in this regard. A wave of national pride swept the country when it hosted the 2006 FIFA World Cup. Far-right parties that stress German national identity and pride and seek to exclude or denigrate non-Germans have existed since the end of World War II but have never governed.
Outside modern-day Germany in Austria, there are Austrian nationalists who have rejected the unification of Austria with Germany on the basis of preserving Austrians' Catholic religious identity from the potential danger posed by being part of a Protestant-majority Germany.
- 1 History
- 2 German nationalism in Austria
- 3 Symbols
- 4 Nationalist political parties
- 5 See also
- 6 References
- 7 Further reading
Defining a German nation
Defining a German nation based on internal characteristics presented difficulties. Since the start of the Reformation in the 16th century, the German lands had been divided between Catholics and Lutherans, and linguistic diversity was large as well. Today, the Swabian, Bavarian, Saxon and Cologne dialects in their purest forms are estimated to be about 40% mutually intelligible with modern Standard German, meaning that in a conversation between a native speaker of one of these dialects and a person who speaks only Standard German, the latter will understand slightly less than half of what is being said without any prior knowledge of the dialect; the divide was likely as large or larger in the 19th century.
Nationalism among the Germans first developed not among the general populace but among the intellectual elites of various German states. The early German nationalist Friedrich Karl von Moser, writing in the mid 18th century, remarked that, compared with "the British, Swiss, Dutch and Swedes", the Germans lacked a "national way of thinking". However, the cultural elites themselves faced difficulties in defining the German nation, often resorting to broad and vague concepts: the Germans as a "Sprachnation" (a people unified by the same language), a "Kulturnation" (a people unified by the same culture) or an "Erinnerungsgemeinschaft" (a community of remembrance, i.e. sharing a common history). Johann Gottlieb Fichte – considered the founding father of German nationalism – devoted the 4th of his Addresses to the German Nation (1808) to defining the German nation and did so in a very broad manner. In his view, there existed a dichotomy between the people of Germanic descent. There were those who had left their fatherland (which Fichte considered to be Germany) during the time of the Migration Period and had become either assimilated or heavily influenced by Roman language, culture and customs, and those who stayed in their native lands and continued to hold on to their own culture.
Later German nationalists were able to define their nation more precisely, especially following the rise of Prussia and formation of the German Empire in 1871, which gave the majority of the German-speakers in Europe a common political, economic and educational framework. In the late 19th century and early 20th century, some German nationalists added elements of racial ideology, ultimately culminating in the Nuremberg Laws, sections of which sought to determine by law and genetics who was to be considered German.
It was not until the concept of nationalism itself was developed by German philosopher Johann Gottfried Herder that German nationalism began. German nationalism was Romantic in nature, based upon the principles of collective self-determination, territorial unification and cultural identity, and a political and cultural programme to achieve those ends. German Romantic nationalism derived from the Enlightenment era philosopher Jean Jacques Rousseau's and French Revolutionary philosopher Emmanuel-Joseph Sieyès' ideas of naturalism, and from the notion that legitimate nations must have been conceived in the state of nature. This emphasis on the naturalness of ethno-linguistic nations continued to be upheld by the early 19th century Romantic German nationalists Johann Gottlieb Fichte, Ernst Moritz Arndt, and Friedrich Ludwig Jahn, who were all proponents of Pan-Germanism.
The invasion of the Holy Roman Empire (HRE) by Napoleon's French Empire and its subsequent dissolution brought about a German liberal nationalism as advocated primarily by the German middle-class bourgeoisie who advocated the creation of a modern German nation-state based upon liberal democracy, constitutionalism, representation, and popular sovereignty while opposing absolutism. Fichte in particular brought German nationalism forward as a response to the French occupation of German territories in his Addresses to the German Nation (1808), evoking a sense of German distinctiveness in language, tradition, and literature that composed a common identity.
After the defeat of France in the Napoleonic Wars, German nationalists at the Congress of Vienna tried but failed to establish Germany as a nation-state; instead, the German Confederation was created, a loose collection of independent German states that lacked strong federal institutions. Economic integration between the German states was achieved with the creation of the Zollverein ("Customs Union") in 1818, which existed until 1866. The move to create the Zollverein was led by Prussia, and the Zollverein was dominated by Prussia, causing resentment and tension between Austria and Prussia.
Revolutions of 1848 to German Unification of 1871
The Revolutions of 1848 led to revolution in various German states. Liberal nationalists seized power in a number of German states, and an all-German parliament was created in Frankfurt in May 1848. The Frankfurt Parliament attempted to create a national constitution for all German states, but rivalry between Prussian and Austrian interests resulted in proponents of the parliament advocating a "small German" solution (a monarchical German nation-state without Austria), with the imperial crown of Germany being granted to the King of Prussia. The King of Prussia refused the offer, and efforts to create a liberal German nation-state faltered and collapsed.
In the aftermath of the failed attempt to establish a liberal German nation-state, rivalry between Prussia and Austria intensified under the agenda of Prussian Chancellor Otto von Bismarck who blocked all attempts by Austria to join the Zollverein. A division developed among German nationalists, with one group led by the Prussians that supported a "Lesser Germany" that excluded Austria and another group that supported a "Greater Germany" that included Austria. The Prussians sought a Lesser Germany to allow Prussia to assert hegemony over Germany that would not be guaranteed in a Greater Germany.
By the late 1850s German nationalists emphasized military solutions. The mood was fed by hatred of the French, a fear of Russia, a rejection of the 1815 Vienna settlement, and a cult of patriotic hero-warriors. War seemed to be a desirable means of speeding up change and progress. Nationalists thrilled to the image of the entire people in arms. Bismarck harnessed the national movement's martial pride and desire for unity and glory to weaken the political threat the liberal opposition posed to Prussia's conservatism.
Prussia achieved hegemony over Germany in the "wars of unification": the Second Schleswig War (1864), the Austro-Prussian War [which effectively excluded Austria from Germany] (1866), and the Franco-Prussian War (1870). A German nation-state was founded in 1871 called the German Empire as a Lesser Germany with the King of Prussia taking the throne of German Emperor (Deutscher Kaiser) and Bismarck becoming Chancellor of Germany.
1871 to World War I, 1914–1918
Unlike the prior German nationalism of 1848 that was based upon liberal values, the German nationalism utilized by supporters of the German Empire was based upon Prussian authoritarianism, and was conservative, reactionary, anti-Catholic, anti-liberal and anti-socialist in nature. The German Empire's supporters advocated a Germany based upon Prussian and Protestant cultural dominance. This German nationalism focused on German identity based upon the historical crusading Teutonic Order. These nationalists supported a German national identity claimed to be based on Bismarck's ideals that included Teutonic values of willpower, loyalty, honesty, and perseverance.
The Catholic-Protestant divide in Germany at times created extreme tension and hostility between Catholic and Protestant Germans after 1871. The Kulturkampf policy pursued in Prussia by German Chancellor and Prussian Prime Minister Otto von Bismarck, which sought to dismantle Catholic culture in Prussia, provoked outrage amongst Germany's Catholics and resulted in the rise of the pro-Catholic Centre Party and the Bavarian People's Party.
There have been rival nationalists within Germany, particularly Bavarian nationalists, who claim that the terms on which Bavaria entered Germany in 1871 were controversial, and who have claimed that the German government has long intruded into the domestic affairs of Bavaria.
German nationalists in the German Empire who advocated a Greater Germany during the Bismarck era focused on overcoming resistance among Protestant Germans to the inclusion of Catholic Germans in the state by creating the Los von Rom! ("Away from Rome!") movement, which advocated the assimilation of Catholic Germans to Protestantism. During the time of the German Empire, a third faction of German nationalists (especially in the Austrian parts of the Austro-Hungarian Empire) advocated a strong desire for a Greater Germany, but, unlike earlier concepts, one led by Prussia instead of Austria; they were known as Alldeutsche.
An important element of German nationalism as promoted by the government and intellectual elite was the emphasis on Germany asserting itself as a world economic and military power, aimed at competing with France and the British Empire for world power. German colonial rule in Africa (1884-1914) was an expression of nationalism and claimed moral superiority that was justified by constructing an image of the natives as the "Other". This approach highlighted racist views of mankind. German colonization was characterized by the use of repressive violence in the name of ‘culture’ and ‘civilization’, concepts that had their origins in the Enlightenment. Germany's cultural-missionary project boasted that its colonial programs were humanitarian and educational endeavors. Furthermore, the wide acceptance among intellectuals of social Darwinism justified Germany's right to acquire colonial territories as a matter of the ‘survival of the fittest’, according to historian Michael Schubert.
Interwar period, 1918-1933
The government established after WWI, the Weimar Republic, created a law of nationality that was based on pre-unification notions of the German Volk as an ethno-racial group defined more by heredity than by modern notions of citizenship; the laws were intended to include ethnic Germans who had emigrated and to exclude immigrant groups. These laws remained the basis of German citizenship law until after reunification.
The government and economy of the Weimar Republic were weak; Germans were dissatisfied with the government, with the punitive conditions of war reparations and the territorial losses of the Treaty of Versailles, and with the effects of hyperinflation. Economic, social, and political cleavages fragmented Germany's society. Eventually, the Weimar Republic collapsed under these pressures and the political maneuverings of leading German officials and politicians.
Nazi Germany, 1933-1945
The Nazi Party (NSDAP), led by Austrian-born Adolf Hitler, believed in an extreme form of German nationalism. The first point of the Nazi 25-point programme was that "We demand the unification of all Germans in the Greater Germany on the basis of the people's right to self-determination". Hitler, an Austrian-German by birth, began to develop his strong patriotic German nationalist views from a very young age. He was greatly influenced by many other Austrian pan-German nationalists in Austria-Hungary, notably Georg Ritter von Schönerer and Karl Lueger. Hitler's pan-German ideas envisioned a Greater German Reich which was to include the Austrian Germans, Sudeten Germans and other ethnic Germans. The annexation of Austria (the Anschluss) and of the Sudetenland completed Nazi Germany's goal of uniting the German Volksdeutsche (ethnic Germans) under one state.
1945 to the present
After WWII, the German nation was divided into two states, West Germany and East Germany, and some former German territories east of the Oder–Neisse line were made part of Poland. The Basic Law for the Federal Republic of Germany, which served as the constitution for West Germany, was conceived and written as a provisional document, with the hope of reuniting East and West Germany in mind.
The formation of the European Economic Community, and latterly the European Union, was driven in part by forces inside and outside Germany that sought to embed German identity more deeply in a broader European identity, in a kind of "collaborative nationalism".
The reunification of Germany became a central theme in West German politics, and was made a central tenet of the East German Socialist Unity Party of Germany, albeit in the context of a Marxist vision of history in which the government of West Germany would be swept away in a proletarian revolution.
The question of Germans and former German territory in Poland, as well as the status of Königsberg as part of Russia, remained a difficult one, with people in West Germany advocating reclaiming those territories into the 1960s. East Germany confirmed the border with Poland in 1950, while West Germany, after a period of refusal, finally accepted the border (with reservations) in 1970.
The desire of the German people to be one nation again remained strong, but was accompanied by a feeling of hopelessness through the 1970s and into the 1980s. Die Wende, when it arrived in the late 1980s, driven by the East German people, came as a surprise, leading to the 1990 elections, which put in place a government that negotiated the Treaty on the Final Settlement with Respect to Germany and reunited East and West Germany; the process of inner reunification then began.
The reunification was opposed in several quarters both inside and outside Germany, including by Margaret Thatcher, Jürgen Habermas, and Günter Grass, out of fear that a united Germany might resume its aggression toward other countries. Just prior to reunification, West Germany had gone through a national debate, called the Historikerstreit, over how to regard its Nazi past, with one side claiming that there was nothing specifically German about Nazism and that the German people should let go of its shame over the past and look forward, proud of its national identity, and the other holding that Nazism grew out of German identity and that the nation needed to remain responsible for its past and guard carefully against any recrudescence of Nazism. This debate did not give comfort to those concerned about whether a reunited Germany might be a danger to other countries, nor did the rise of skinhead neo-Nazi groups in the former East Germany, as exemplified by riots in Hoyerswerda in 1991. An identity-based nationalist backlash arose after unification as people reached backward to answer "the German question", leading to violence by four neo-Nazi/far-right parties, all of which were banned by Germany's Federal Constitutional Court after committing or inciting violence: the Nationalist Front, National Offensive, German Alternative, and the Kamaradenbund.
One of the key questions for the reunified government was how to define a German citizen. The laws inherited from the Weimar Republic that based citizenship on heredity had been taken to their extreme by the Nazis; they were unpalatable and fed the ideology of German far-right nationalist parties like the National Democratic Party of Germany (NPD), which was founded in 1964 from other far-right groups. Additionally, West Germany had received large numbers of immigrants (especially Turks), membership in the European Union meant that people could move more or less freely across national borders within Europe, and due to its declining birthrate even united Germany needed to receive about 300,000 immigrants per year in order to maintain its workforce. (Germany had been importing workers ever since its post-war "economic miracle" through its Gastarbeiter program.) The Christian Democratic Union/Christian Social Union government that governed throughout the 1990s did not change the laws, but around 2000 a new coalition led by the Social Democratic Party of Germany came to power and changed the law to define who is a German based on jus soli rather than jus sanguinis.
The issue of how to address its Turkish population has remained a difficult one in Germany; many Turks have not integrated and have formed a parallel society inside Germany, questions of using education or legal penalties to drive integration have roiled Germany from time to time, and debates over what a "German" is accompany debates about "the Turkish question".
Pride in being German remained a difficult issue; one of the surprises of the 2006 FIFA World Cup, which was held in Germany, was the widespread display of national pride by Germans, which seemed to take even the Germans themselves by surprise and cautious delight.
Germany's role in managing the European debt crisis, especially with regard to the Greek government-debt crisis, led to criticism from some quarters, especially within Greece, that Germany was wielding its power in a harsh, authoritarian way reminiscent of its past.
Tensions over the European debt crisis and the European migrant crisis and the rise of right-wing populism sharpened questions of German identity around 2010. The Alternative for Germany party was created in 2013 as a backlash against further European integration and bailouts of other countries during the European debt crisis; from its founding to 2017 the party took on nationalist and populist stances, rejecting German guilt over the Nazi era and calling for Germans to take pride in their history and accomplishments. In the 2014 European Parliament election, the NPD won its first ever seat in the European Parliament.
German nationalism in Austria
After the Revolutions of 1848/49, in which the liberal nationalistic revolutionaries advocated the Greater German solution, the Austrian defeat in the Austro-Prussian War (1866), with the effect that Austria was now excluded from Germany, and increasing ethnic conflicts in the Habsburg Monarchy of the Austro-Hungarian Empire, a German national movement evolved in Austria. Led by the radical German nationalist and antisemite Georg von Schönerer, organisations like the Pan-German Society demanded the link-up of all German-speaking territories of the Danube Monarchy to the German Empire, and decidedly rejected Austrian patriotism. Schönerer's völkisch and racist German nationalism was an inspiration to Hitler's ideology. In 1933, Austrian Nazis and the national-liberal Greater German People's Party formed an action group, fighting together against the Austrofascist regime which imposed a distinct Austrian national identity. Although it violated the terms of the Treaty of Versailles, Hitler, a native of Austria, united the two German states (the Anschluss) in 1938. This meant the historic aim of Austria's German nationalists was achieved, and a Greater German Reich briefly existed until the end of the war. After 1945, the German national camp was revived in the Federation of Independents and the Freedom Party of Austria.
Flag of the German Empire, originally designed in 1867 for the North German Confederation and adopted as the flag of Germany in 1871. It was later used by opponents of the Weimar Republic, who saw the republic's black-red-gold flag as a symbol of it. More recently it has been used by far-right nationalists in Germany.
Nationalist political parties
- Freedom Party of Austria (1956–present)
- National Democratic Party of Germany (1964–present)
- The Republicans (1983–present)
- Alternative for Germany (2013–present)
- German National People's Party (1918–33)
- German Workers' Party (1919–20)
- Greater German People's Party (1920–34)
- National Socialist German Workers' Party (1920–45)
- Deutsche Rechtspartei (1946–50)
- Federation of Independents (1949–55)
- Socialist Reich Party (1949–52)
- Deutsche Reichspartei (1950–64)
- German Social Union (1956–62)
- Free German Workers' Party (1979–95)
- German People's Union (1987–2011)
- National Offensive (1990–92)
- German National People's Party
- Frankfurt Parliament
- German question
- Unification of Germany
- German reunification
- Related nationalisms
- Völkisch movement
- Verheyen 1999, p. 8.
- Motyl 2001, p. 190.
- Spohn, Willfried (2005), "Austria: From Habsburg Empire to a Small Nation in Europe", Entangled identities: nations and Europe, Ashgate, p. 61
- Ethnologue, mutual intelligibility of German dialects / Languages of Germany.
- Jansen, Christian (2011), "The Formation of German Nationalism, 1740-1850," in: Helmut Walser Smith (Ed.), The Oxford Handbook of Modern German History. Oxford: Oxford University Press. pp. 234-259; here: pp. 239-240.
- The German Opposition to Hitler, Michael C. Thomsett (1997) p7.
- Addresses to the German Nation, p. 52.
- The German Opposition to Hitler, Michael C. Thomsett (1997)
- Motyl 2001, pp. 189-190.
- Smith 2010, p. 24.
- Smith 2010, p. 41.
- Verheyen 1999, p. 7.
- Jusdanis 2001, pp. 82-83.
- Verheyen 1999, pp. 7-8.
- Frank Lorenz Müller, "The Spectre of a People in Arms: The Prussian Government and the Militarisation of German Nationalism, 1859-1864," English Historical Review (2007) 122#495 pp 82-104. in JSTOR
- Verheyen 1999, pp. 8, 25.
- Kesselman 2009, p. 181.
- Samson 2002, p. 440.
- Gerwarth 2005, p. 20.
- Wolfram Kaiser, Helmut Wohnout. Political Catholicism in Europe, 1918-45. London, England, UK; New York, New York, USA: Routledge, 2004. P. 40.
- James Minahan. One Europe, Many Nations: A Historical Dictionary of European National Groups. Greenwood Publishing Group, Ltd., 2000. P. 108.
- Seton-Watson 1977, p. 98.
- Verheyen 1999, p. 24.
- Michael Schubert, "The ‘German nation’ and the ‘black Other’: social Darwinism and the cultural mission in German colonial discourse," Patterns of Prejudice (2011) 45#5 pp 399-416.
- Felicity Rash, The Discourse Strategies of Imperialist Writing: The German Colonial Idea and Africa, 1848-1945 (Routledge, 2016).
- Berdahl, Robert M. (2005). "German Reunification in Historical Perspective". The Berkeley Journal of International Law. 23 (2). doi:10.15779/Z38RS8N.
- Cameron, Keith (1999). National Identity. Intellect Books. ISBN 9781871516050.
- Posener, Alan (20 June 2016). "German nationalism can only be contained by a united Europe". The Guardian.
- Jessup, John E. (1998). An encyclopedic dictionary of conflict and conflict resolution, 1945-1996. Westport, Conn.: Greenwood Press. p. 543. ISBN 978-0313281129.
- Brown, Timothy S. (1 January 2004). "Subcultures, Pop Music and Politics: Skinheads and "Nazi Rock" in England and Germany". Journal of Social History. 38 (1): 157–178. JSTOR 3790031.
- "National Democratic Party of Germany (NPD)". Encyclopædia Britannica. Retrieved 9 November 2015.
- Conradt, David P. Germany's New Politics: Parties and Issues in the 1990s. Berghahn Books. p. 258. ISBN 9781571810335.
- "History of the Guest Workers". German Missions in the United States. Retrieved 14 May 2017.
- "A Study says Turks are Germany's worst integrated immigrants". Retrieved 18 May 2016.
- "Immigration: Survey Shows Alarming Lack of Integration in Germany". Retrieved 18 May 2016.
- "The Welfare Use of Immigrants and Natives in Germany: The Case of Turkish Immigrants" (PDF). Retrieved 25 January 2017.
- Prevezanos, Klaudia (30 October 2011). "Turkish guest workers transformed German society | Germany and Turkey - A difficult relationship | DW.COM | 30.10.2011". Deutsche Welle.
- Bernstein, Richard (18 June 2006). "In World Cup Surprise, Flags Fly With German Pride". The New York Times.
- Harding, Luke (29 June 2006). "Germany revels in explosion of national pride and silly headgear". The Guardian.
- Shuster, Simon (July 15, 2015). "Germany Finds Itself Playing the Villain in Greek Drama". Time.
- Wagstyl, Stefan (July 15, 2015). "Merkel's tough tactics prompt criticism in Germany and abroad". Financial Times.
- Cohen, Roger (13 July 2015). "The German Question Redux". The New York Times.
- Taub, Amanda; Fisher, Max (18 January 2017). "Germany's Extreme Right Challenges Guilt Over Nazi Past". The New York Times.
- "Understanding the 'Alternative for Germany': Origins, Aims and Consequences" (PDF). University of Denver. November 16, 2016. Retrieved 29 April 2017.
- Beyer, Susanne; Fleischhauer, Jan (March 30, 2016). "AfD Head Frauke Petry: 'The Immigration of Muslims Will Change Our Culture'". Der Spiegel.
- "Meet the new faces ready to sweep into the European parliament". The Guardian. 26 May 2014. Retrieved 11 January 2015.
- Andrew Gladding Whiteside, The Socialism of Fools: Georg Ritter von Schönerer and Austrian Pan-Germanism (U of California Press, 1975).
- Ian Kershaw (2000). Hitler: 1889-1936 Hubris. pp. 33–34, 63–65.
- Morgan, Philip (2003). Fascism in Europe, 1919-1945. Routledge. p. 72. ISBN 0-415-16942-9.
- Bideleux, Robert; Jeffries, Ian (1998), A history of eastern Europe: Crisis and Change, Routledge, p. 355
- Anton Pelinka, Right-Wing Populism Plus "X": The Austrian Freedom Party (FPÖ). Challenges to Consensual Politics: Democracy, Identity, and Populist Protest in the Alpine Region (Brussels: P.I.E.-Peter Lang, 2005) pp. 131–146.
- Gerwarth, Robert (2005). The Bismarck myth: Weimar Germany and the legacy of the Iron Chancellor. Oxford, England, UK: Oxford University Press. ISBN 0-19-928184-X.
- Hagemann, Karen. "Of 'manly valor' and 'German Honor': nation, war, and masculinity in the age of the Prussian uprising against Napoleon". Central European History 30#2 (1997): 187-220.
- Jusdanis, Gregory (2001). The Necessary Nation. Princeton UP. ISBN 0-691-08902-7.
- Kesselman, Mark (2009). European Politics in Transition. Boston: Houghton Mifflin Company. ISBN 0-618-87078-4.
- Motyl, Alexander J. (2001). Encyclopedia of Nationalism, Volume II. Academic Press. ISBN 0-12-227230-7.
- Pinson, K.S. Pietism as a Factor in the Rise of German Nationalism (Columbia UO, 1934).
- Samson, James (2002). The Cambridge History of Nineteenth-Century Music. Cambridge UP. ISBN 0-521-59017-5.
- Schulze, Hagen. The Course of German Nationalism: From Frederick the Great to Bismarck 1763-1867 (Cambridge UP, 1991).
- Seton-Watson, Hugh (1977). Nations and states: an enquiry into the origins of nations and the politics of nationalism. Methuen & Co. Ltd. ISBN 0-416-76810-5.
- Smith, Anthony D. (2010). Nationalism. Cambridge, England, UK; Malden, Massachusetts, USA: Polity Press. ISBN 0-19-289260-6.
- Smith, Helmut Walser. German nationalism and religious conflict: culture, ideology, politics, 1870-1914 (Princeton UP, 2014).
- Verheyen, Dirk (1999). The German question: A Cultural, Historical, and Geopolitical Exploration. Westview Press. ISBN 0-8133-6878-2.
|
As comets blaze across the night sky, they can bring wonder and excitement to those watching from Earth – or even a sense of impending doom.
In the past, people debated what comets even are – an atmospheric phenomenon, a fire in the sky, a star with a broom-like tail?
You’ll get a chance to see which visual description you think fits best this month: Comet 46P/Wirtanen is expected to make an appearance in mid-December that may well be visible even to the naked eye.
Through Edmond Halley’s study in the 17th century of what became known as Halley’s comet, astronomers realized comets are within our solar system. They have highly elliptical or elongated orbits around the sun. Some have orbits that extend well beyond Pluto while some stay relatively close.
When comets are farther out in the solar system, they're not much to look at. They're often compared to dirty snowballs. But unlike a rocky asteroid, a comet also contains volatile frozen gases such as methane, carbon monoxide, carbon dioxide and ammonia along with its nucleus of rock, ice and dust.
As a comet gets closer to the sun, heat causes the comet's volatile elements to turn from solid into gas in a process called sublimation. As water, methane, carbon dioxide and ammonia are released, they create the tail comets are known for, as well as a bright cloud called a coma around the nucleus.
Comets actually have two distinct tails: one a dust tail, the other an ion or gas tail. Solar wind and radiation pressure push the tails away from the sun. Ultraviolet light ionizes some of the tail material, creating a charged gas that interacts with the charged solar wind and ends up pointing directly away from the sun. The noncharged dust tail still follows the comet’s orbit, resulting in a more curved tail.
As a comet goes through this process, it will brighten, making for a great show for stargazers – or rather, cometgazers. Predicting how bright a comet will be is notoriously difficult though, since it’s never clear exactly how the gases will behave. Even measuring the brightness is tricky. Unlike the way a star’s brightness is concentrated into a single point from our perspective on Earth, a comet’s brightness is diffused over a larger area.
Astronomer Carl Wirtanen discovered his namesake comet in 1948. He was a skilled object hunter and used photos of the night sky to spot the object, which was moving quickly - at least astronomically speaking.
Comet 46P/Wirtanen’s orbit keeps it pretty near to the sun. Its aphelion, or farthest point from the sun, is about 5.1 astronomical units (AU), which is just a tad bigger than Jupiter’s orbit. Its perihelion, or closest approach to the sun, is about 1 AU, just about the Earth’s distance from the sun. This path takes about 5.4 years to complete, meaning it comes back into view quite frequently compared to other famous comets.
Right now, it is approaching its perihelion. Its closest point to the sun will fall on Dec. 16 – which is why it will be brightest on this day.
Comet 46P/Wirtanen is a particularly active comet – called a hyperactive comet – and tends to be brighter than other comets of a similar size. This makes it a good candidate for viewing. Predictions suggest it will be as bright as a magnitude 3, which is a little brighter than the dimmest star in the Big Dipper, Megrez. However, there are some predictions that keep it beyond naked eye visibility at a brightest magnitude of only 7.6. The dimmest object visible with the naked human eye is magnitude 6, under perfect observing conditions.
If those magnitudes seem a little off, it’s because astronomers use a backwards system. The smaller the number, the brighter the object.
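Concretely (this formula is the standard definition of the scale, not from the original article), a difference of five magnitudes corresponds to a brightness ratio of exactly 100, so each magnitude step is a factor of about 2.512:

```latex
\frac{F_1}{F_2} = 100^{(m_2 - m_1)/5} \approx 2.512^{\,m_2 - m_1}
```

At magnitude 3 the comet would be roughly 2.512^3 ≈ 16 times brighter than the magnitude-6 naked-eye limit; at the pessimistic magnitude 7.6 it would be roughly 4 times fainter than that limit.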
To try to see this comet, get to as dark a sky as you can on Dec. 16, when it will be at its brightest. It will be between the constellation Taurus and the Pleiades star cluster.
If you cannot see Comet 46P/Wirtanen with your naked eye, use binoculars or a small telescope to catch a glimpse. The comet is already in the sky, but requires a telescope. You can start following now using maps showing its position night by night. Its location in the sky also means it is visible for all but Earth’s extreme southernmost latitudes.
The comet’s position near Taurus makes it ideal for spotting all night long. Taurus is just in the east after the sunset and moves toward the west throughout the night.
May you have clear skies for observing. You can decide for yourself whether this comet will be an omen of good or bad luck for 2019.
|
BPS District Computer Science and Cybersecurity Standards Book
K-12 Grade Levels
CSC-12.TS TS Technology Systems
(NI) Network & Internet - Networks link computers and devices locally and around the world allowing people to access and communicate information.
(HS) Hardware & Software - Devices, hardware, and software work together as a system to accomplish tasks.
(T) Troubleshooting - Strategies for solving technology system problems.
- CSC-12.TS_T.1 Implement systematic troubleshooting strategies to identify and fix errors.
CSC-12.CT CT Computational Thinking
(PSA) Problem Solving & Algorithms - Strategies for understanding and solving problems.
(DCA) Data Creation & Analysis - Data can be collected, used, and presented with computing devices or digital tools.
(DD) Development & Design - Design processes to create new, useful, and imaginative solutions to problems.
CSC-12.IL IL Information Literacy
(A) Access - Effective search strategies can locate information for intellectual or creative pursuits.
(E) Evaluate - Information sources can be evaluated for accuracy, currency, appropriateness, and purpose.
(C) Create - It is important to both consume and produce information to be digitally literate.
(IP) Intellectual Property - Respect for the rights and obligations of using and sharing intellectual property.
- CSC-12.IL_A.1 Build knowledge by actively exploring real-world issues and problems, developing ideas and theories and pursuing answers and solutions.
- CSC-12.IL_E.1 Explain source selection based on accuracy, perspective, credibility, and relevance of information, media, data, or other resources.
- CSC-12.IL_C.1 Exhibit perseverance, a tolerance for ambiguity, and the capacity to work with open-ended problems in the design and creation process.
- CSC-12.IL_IP.1 Debate laws and regulations that impact the development and use of software.
- CSC-12.IL_IP.2 Cite sources in a standard format to ethically reference the intellectual property of others. (Continued growth)
- CSC-12.IL_IP.3 Evaluate the social and economic implications of piracy and plagiarism in the context of safety, law, or ethics. (Continued growth)
CSC-12.CS CS Computers in Society
(IC) Impacts of Computing - Past, present, and possible future impact of technology on society.
(SI) Social Interactions - Technology facilitates collaboration with others.
CSC-12.DC DC Digital Citizenship
(SE) Safety & Ethics - There are both positive and negative impacts in social and ethical behaviors for using technology.
(RU) Responsible Use - Respect and dignity in virtual communities.
(DI) Digital Identity - Responsibilities and opportunities of living, learning and working in an interconnected digital world.
- CSC-12.DC_SE.1 Understand encryption and how it is used to protect data. (CYSEC) (Continued Growth)
- CSC-12.DC_SE.2 Illustrate how sensitive data can be affected by malware and other attacks. (CYSEC)
- CSC-12.DC_SE.3 Manage personal data to maintain digital privacy and security, and be aware of data-collection technology used to track online behaviors. (CYSEC) (Continued Growth)
- CSC-12.DC_SE.4 Develop a plan to recover from an incident that was tied to unauthorized access. (CYSEC) (Continued Growth)
- CSC-12.DC_RU.1 Apply cyberbullying prevention strategies. (Continued growth)
- CSC-12.DC_RU.2 Apply safe and ethical behaviors to personal electronic communication and interaction. (CYSEC) (Continued growth)
- CSC-12.DC_RU.3 Use appropriate digital etiquette in a variety of situations. (Continued growth)
- CSC-12.DC_RU.4 Understand the purpose of and comply with Acceptable Use Policies.
- CSC-12.DC_DI.1 Manage a digital identity and be aware of the permanence of actions in the digital world. (CYSEC) (Continued growth)
|
What is a Network?
“A network is defined as two or more computers linked together for the purpose of communicating and sharing information and other resources”.
Basic Requirements of a Network:
Connections include the hardware (physical components) required to hook up a computer to the network. Two terms are important to network connections:
The network medium: The network hardware that physically connects one computer to another. This is the cable between the computers.
The network interface: The hardware that attaches a computer to the network medium and acts as an interpreter between the computer and the network. Attaching a computer to a network requires an add-in board known as a network interface card (NIC).
Communications establish the rules concerning how computers talk and understand each other. Because computers often run different software, in order to communicate with each other they must speak a "shared language." Without shared communications, computers cannot exchange information, and remain isolated.
A service defines those things a computer shares with the rest of the network. For example, a computer can share a printer or specific directories or files. Unless computers on the network are capable of sharing resources, they remain isolated, even though physically connected.
Types of Network:
There are two types of network.
LAN (LOCAL AREA NETWORK):
A LAN (local area network) is a network that covers a limited distance (usually a single site or facility) and allows sharing of information and resources. A LAN can be as simple as two connected computers, or as complicated as a large site. This type of network is very popular because it allows individual computers to provide processing power and utilize their own memory, while programs and data can be stored on any computer in the network. Some of the older LANs also include configurations that rely totally on the power of a mini or mainframe computer (a server) to do all the work. In this case, the workstations are no more than "dumb" terminals (a keyboard and a monitor). With the increased power of today's personal computer, these types of networks are rare.
WAN (Wide Area Networks):
A wide area network (WAN) spans relatively large geographical areas. Connections for these sites require the use of ordinary telephone lines, T1 lines, ISDN (Integrated Services Digital Network) lines, radio waves, or satellite links. WANs can be accessed through dial-up connections, using a modem, or leased line direct connection. The leased-line method is more expensive but can be cost-effective for transmission of large volumes of data.
The physical layout of a network is called its network topology. There are three common topologies: ring, bus, and star.
In a star network, all devices are connected to a central point called a hub. These hubs collect and distribute the flow of data within the network. Signals from the sending computer go to the hub and are then transmitted to all computers on the network. Large networks can feature several hubs. A star network is easy to troubleshoot because all information goes through the hub, making it easier to isolate problems.
In a bus network, all devices are connected to a single linear cable called a trunk (also known as a backbone or segment). Both ends of the cable must be terminated (like a SCSI bus) to stop the signal from bouncing. Because a bus network does not have a central point, it is more difficult to troubleshoot than a star network. A break or problem at any point along the bus can cause the entire network to go down.
In a ring network, all workstations and servers are connected in a closed loop. There are no terminating ends; therefore, if one computer fails, the entire network will go down. Each computer in the network acts like a repeater and boosts the signal before sending it to the next station. This type of network transmits data by passing a "token" around the network. If the token is free of data, a computer waiting to send data grabs it, attaches the data and the electronic address to the token, and sends it on its way. When the token reaches its destination computer, the data is removed and the token sent on.
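A toy sketch of that token-passing scheme (the station names, frame fields, and messages are invented for illustration; real Token Ring hardware is far more involved):

```python
# Toy token-ring pass: a token circulates around the loop; a station
# with pending data grabs a free token, and the destination strips
# the data off and frees the token again.
stations = ["A", "B", "C", "D"]      # the closed loop, in ring order
pending = {"B": ("D", "hello")}      # station B wants to send "hello" to D
token = {"busy": False}              # a free token carries no frame

for hop in range(2 * len(stations)): # walk the ring a couple of times
    here = stations[hop % len(stations)]
    if token["busy"] and token["dst"] == here:
        print(f"{here}: received {token['data']!r}, freeing the token")
        token = {"busy": False}
    elif not token["busy"] and here in pending:
        dst, data = pending.pop(here)
        token = {"busy": True, "dst": dst, "data": data}
        print(f"{here}: grabbed the free token, sending to {dst}")
    # every station acts as a repeater, passing the token to the next one
```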
Network cable and connectors:
All networks need cables. The three main types are twisted-pair cable (TP), coaxial cable, and fiber-optic cable (used, for example, in FDDI—Fiber Distributed Data Interface networks).
Twisted-pair cable consists of two insulated strands of copper wire twisted around each other to form a pair. One or more twisted pairs are used in a twisted-pair cable. The purpose of twisting the wires is to eliminate electrical interference from other wires and outside sources such as motors. By twisting the wires, any electrical noise from the adjacent pair will be canceled. The more twists per linear foot, the greater the effect.
Twisted-pair wiring comes in two types: shielded (STP) and unshielded (UTP). STP has a foil or wire braid wrapped around the individual wires of the pairs; UTP does not. The STP cable uses a woven-copper braided jacket, which is a higher-quality, more protective jacket than UTP.
Of the two types, UTP is the most common. UTP cables can be further divided into five categories:
Category 1: Traditional telephone cable. Carries voice but not data.
Category 2: Certified UTP for data transmission of up to 4 Mbps (megabits per second). It has four twisted pairs.
Category 3: Certified UTP for data transmission of up to 10 Mbps. It has four twisted pairs.
Category 4: Certified UTP for data transmissions up to 16 Mbps. It has four twisted pairs.
Category 5: Certified for data transmissions up to 100 Mbps. It has four twisted pairs of copper wire.
Twisted-pair cable has several advantages over other types of cable (coaxial and fiber-optic)—it is readily available, easy to install, and inexpensive. Among its disadvantages are its sensitivity to EMI (electromagnetic interference) and susceptibility to eavesdropping; it does not support communication at distances of greater than 100 meters; and it requires the addition of a hub (a multiple network connection point) if it is to be used with more than two computers.
CAT5 Cabling Issues
Ethernet networks use unshielded twisted pair (UTP) Category 5 cable. CAT5 cable runs should not exceed 100 meters.
Coaxial cable is made of two conductors that share the same axis; the center is a copper wire that is insulated by a plastic coating and then wrapped with an outer conductor (usually a wire braid). This outer conductor around the insulation serves as electrical shielding for the signal being carried by the inner conductor. Outside the outer conductor is a tough insulating plastic tube that provides physical and electrical protection. At one time, coaxial cable was the most widely used network cabling. However, with improvements and the lower cost of twisted-pair cables, it has lost its popularity.
Coaxial cable is found in two types: thin (ThinNet) and thick (ThickNet). Of the two, ThinNet is the easiest to use. It is about one-quarter of an inch in diameter, making it flexible and easy to work with (it is similar to the material commonly used for cable TV). ThinNet can carry a signal about 605 feet (185 meters) before the signal strength begins to suffer. ThickNet, on the other hand, is about three-eighths of an inch in diameter. This makes it a better conductor—it can carry a signal about 1,640 feet (500 meters) before signal strength begins to suffer. The disadvantage of ThickNet over ThinNet is that it is more difficult to work with. The ThickNet version is also known as standard Ethernet cable.
When compared to twisted-pair, coaxial cable is the better choice even though it costs more. It is a standard technology that resists rough treatment and EMI. Although more resistant, it is still susceptible to EMI and eavesdropping.
Use coaxial cable if you need:
A medium that can transmit voice, video, and data.
To transmit data longer distances than less-expensive cabling.
A familiar technology that offers reasonable data security.
A Mixed-Cable System
Many networks use both twisted-pair and coaxial cable. Twisted-pair cable is used on a per-floor basis to run wires to individual workstations. Coaxial cable is used to wire multiple floors together. Coaxial cable should also be considered for a small network because you can purchase prefabricated cables (with end connectors installed) in various lengths.
Fiber-optic cable is made of light-conducting glass or plastic fibers. It can carry data signals in the form of modulated pulses of light. The plastic-core cables are easier to install, but do not carry signals as far as glass-core cables. Multiple fiber cores can be bundled in the center of the protective tubing.
When both material and installation costs are taken into account, fiber-optic cable can prove to be no more expensive than twisted-pair or coaxial cable. Fiber has some advantages over copper wire; it is immune to EMI and detection outside the cable and provides a reliable and secure transmission media. It also supports very high bandwidths (the amount of information the cable can carry), so it can handle thousands of times more data than twisted-pair or coaxial cable.
Cable lengths can run from .25 to 2.0 kilometers depending on the fiber-optic cable and network. If you need to network multiple buildings, this should be the cable of choice. Fiber-optic cable systems require the use of fiber-compatible NICs.
There are two different types of RJ-45 connectors. There is the "bent tyne" connector intended for use with solid core CAT5, and then there is the "aligned tyne" connector for use with stranded CAT5 cable. Errors have popped up when using incorrect cable/connector combinations. The "bent tyne" connector will work just fine on stranded wire, by the way, just not the other way around. In general, make sure your connector matches your cable type.
Standards set forth by EIA/TIA 568A/568B and AT&T 258A define the acceptable wiring and color-coding schemes for CAT5 cables.
Connector types shown: ST (straight) connector, MTRJ connector, MTP connector, and duplex SC connector.
There are two common types of fiber optic connectors: SC and ST. The ST or "straight tip" connector is the most common connector used with fiber optic cable, although this is no longer the case for use with Ethernet. It is barrel shaped, similar to a BNC connector, and was developed by AT&T. A newer connector, the SC, is becoming more and more popular. It has a squared face and is thought to be easier to connect in a confined space. The SC is the connector type found on most Ethernet switch fiber modules and is the connector of choice for 100Mbit and Gigabit Ethernet. A duplex version of the SC connector is also available, which is keyed to prevent the TX and RX fibers being incorrectly connected.
There are two more fiber connectors that we may see more of in the future. These are the MTRJ and MTP. They are both duplex connectors and are approximately the size of an RJ45 connector.
Network Interface Cards:
Network interface cards (NICs) link a computer to the network cable system. They provide the physical connection between the computer's expansion bus and the network cabling.
Installation of the network interface card is the same as for any other expansion card. It requires setup of the system resources: IRQ, address, and software. Most cards today allow connection for either thin Ethernet or UTP (unshielded twisted-pair) cabling. Thin Ethernet uses a round BNC connector, and UTP uses an RJ-45 connector (similar to a telephone jack).
Network interface card
Installing a NIC is just like installing any other expansion card. If you are installing a Windows 95-compliant Plug and Play card in a Windows 95 or Windows 98 machine, you'll simply need to physically install the card and boot up the computer. The card will be detected and, more than likely, install itself. You might only need to answer a few questions along the way. It requires a little more work to install a NIC in an operating system that is not Plug and Play-compliant. Installing network cards includes the following steps:
Be sure to document any changes that you make to the existing computer. This will eliminate any confusion in the installation process and provide future reference in case of problems.
Determine whether the card needs IRQ, DMA (direct memory access), or address settings. Remember that you might have to configure these manually, so be sure to check the card's documentation for default settings and instructions for how to make any needed changes.
Determine whether the necessary settings are available on the machine on which they will be installed. If proper documentation is not available, use diagnostic software such as Microsoft Diagnostics (MSD) to determine settings. Also check your AUTOEXEC.BAT, CONFIG.SYS, and SYSTEM.INI files; they might give clues as to which settings are already in use.
Turn off the machine and remove the cover. Be sure to take all appropriate measures for protection against electrostatic discharge (ESD).
Set the NIC's jumpers or DIP (dual inline package) switches as necessary and insert the card.
Turn on the machine and run the setup utility provided by the manufacturer. If you are using Windows 95, Windows 98, or Windows 2000, and the NIC is not Plug and Play, you can use the Add New Hardware wizard in the Control Panel to install the drivers and set up the card. (Remember to document all settings.)
A network protocol is a set of rules that govern the way computers communicate over a network. In order for computers using different software to communicate, they must follow the same set of networking rules and agreements, called protocols. A protocol is like a language; unless both computers are speaking and listening in the same language, no communication will take place.
Networking protocols are grouped according to their functions, such as sending and receiving messages from the NIC, or talking to the computer hardware and making it possible for applications to function in a network. Early computer networks had manufacturer-unique inflexible hardware and strict protocols. Today's protocols are designed to be open, which means they are not vendor-, hardware-, or software-specific. Protocols are generically referred to as protocol families or protocol suites because they tend to come in groups (usually originating from specific vendors).
To see the network components, including protocols, that are associated with a network connection, open the Network Connections folder, right-click the connection, and select Properties.
Here are the components that XP/2000 installs by default:
To see the settings for a particular protocol, click the protocol and then click Properties.
By default, XP/2000 configures TCP/IP to obtain an IP address automatically. If there's a DHCP server on the network, it will assign the IP address and other TCP/IP settings to the connection. Otherwise, Windows XP/2000 will use Automatic Private IP Addressing to assign an IP address to the connection.
This default configuration should work, unchanged, to connect a Windows XP computer to a network that uses TCP/IP for File and Printer Sharing in these common configurations:
One computer on the network is running Internet sharing software, such as Internet Connection Sharing, and provides a DHCP server for assigning TCP/IP settings to the other computers.
A hardware router provides shared Internet access and a DHCP server.
All computers run either Windows 98, 98SE, Me, 2000, or XP, with no DHCP server. The computers can use Automatic Private IP Addressing to assign themselves compatible IP addresses.
Using an Internet sharing program or a hardware router protects the local area network from access by other Internet users, so it's safe to use TCP/IP for File and Printer Sharing on the LAN. The computers have private IP addresses that aren't accessible from the Internet. No other protocol is needed.
If your network uses static IP addresses, click Use the following IP address and enter the configuration information. For example, here are possible settings for a network that uses a proxy server at IP address 192.168.1.1 for Internet access.
|
Internet Protocol version 4 (IPv4) is the fourth version of the Internet Protocol (IP). It is one of the core protocols of standards-based internetworking methods in the Internet and other packet-switched networks. IPv4 was the first version deployed for production in the ARPANET in 1983. It still routes most Internet traffic today, despite the ongoing deployment of a successor protocol, IPv6. IPv4 is described in IETF publication RFC 791 (September 1981), replacing an earlier definition (RFC 760, January 1980).
IPv4 uses a 32-bit address space, which provides 4,294,967,296 (2^32) unique addresses, but large blocks are reserved for special networking methods.
The Internet Protocol is the protocol that defines and enables internetworking at the internet layer of the Internet Protocol Suite. In essence it forms the Internet. It uses a logical addressing system and performs routing, which is the forwarding of packets from a source host to the next router that is one hop closer to the intended destination host on another network.
IPv4 is a connectionless protocol, and operates on a best effort delivery model, in that it does not guarantee delivery, nor does it assure proper sequencing or avoidance of duplicate delivery. These aspects, including data integrity, are addressed by an upper layer transport protocol, such as the Transmission Control Protocol (TCP).
IPv4 uses 32-bit addresses, which limits the address space to 4,294,967,296 (2^32) addresses.
IPv4 addresses may be represented in any notation expressing a 32-bit integer value. They are most often written in dot-decimal notation, which consists of four octets of the address expressed individually in decimal numbers and separated by periods.
For example, the quad-dotted IP address 192.0.2.235 represents the 32-bit decimal number 3221226219, which in hexadecimal format is 0xC00002EB. This may also be expressed in dotted hex format as 0xC0.0x00.0x02.0xEB, or with octal byte values as 0300.0000.0002.0353.
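These conversions are easy to check in code. Below is a minimal Python sketch (standard library only) using the example address above:

```python
import socket
import struct

# Dot-decimal notation -> the underlying 32-bit integer.
addr = "192.0.2.235"
packed = socket.inet_aton(addr)           # 4 bytes in network (big-endian) order
value = struct.unpack("!I", packed)[0]    # interpret as an unsigned 32-bit int

print(value)        # 3221226219
print(hex(value))   # 0xc00002eb

# And back: 32-bit integer -> dot-decimal notation.
print(socket.inet_ntoa(struct.pack("!I", value)))   # 192.0.2.235
```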
CIDR notation combines the address with its routing prefix in a compact format, in which the address is followed by a slash character (/) and the count of consecutive 1 bits in the routing prefix (subnet mask).
Other address representations were in common use when classful networking was practiced. For example, the loopback address 127.0.0.1 is commonly written as 127.1, given that it belongs to a class-A network with eight bits for the network mask and 24 bits for the host number. When fewer than four numbers are specified in the address in dotted notation, the last value is treated as an integer of as many bytes as are required to fill out the address to four octets. Thus, the address 127.65530 is equivalent to 127.0.255.250.
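The fill-out rule can be expressed in a few lines. The sketch below is my own illustration of the rule (the expand helper is hypothetical, not a standard-library function), matching the behavior described above:

```python
def expand(addr: str) -> str:
    """Expand shorthand dotted notation (e.g. '127.65530') to four octets.

    Per the classic inet_aton rule, the last value is treated as an
    integer wide enough to fill the remaining bytes of the address.
    """
    parts = [int(p) for p in addr.split(".")]
    head, last = parts[:-1], parts[-1]
    tail = last.to_bytes(4 - len(head), "big")    # remaining bytes, big-endian
    return ".".join(str(b) for b in bytes(head) + tail)

print(expand("127.1"))       # 127.0.0.1
print(expand("127.65530"))   # 127.0.255.250
```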
In the original design of IPv4, an IP address was divided into two parts: the network identifier was the most significant octet of the address, and the host identifier was the rest of the address. The latter was also called the rest field. This structure permitted a maximum of 256 network identifiers, which was quickly found to be inadequate.
To overcome this limit, the most-significant address octet was redefined in 1981 to create network classes, in a system which later became known as classful networking. The revised system defined five classes. Classes A, B, and C had different bit lengths for network identification. The rest of the address was used as previously to identify a host within a network. Because of the different sizes of fields in different classes, each network class had a different capacity for addressing hosts. In addition to the three classes for addressing hosts, Class D was defined for multicast addressing and Class E was reserved for future applications.
Dividing existing classful networks into subnets began in 1985 with the publication of RFC 950. This division was made more flexible with the introduction of variable-length subnet masks (VLSM) in RFC 1009 in 1987. In 1993, based on this work, RFC 1517 introduced Classless Inter-Domain Routing (CIDR), which expressed the number of bits (from the most significant) as, for instance, /24, and the class-based scheme was dubbed classful, by contrast. CIDR was designed to permit repartitioning of any address space so that smaller or larger blocks of addresses could be allocated to users. The hierarchical structure created by CIDR is managed by the Internet Assigned Numbers Authority (IANA) and the regional Internet registries (RIRs). Each RIR maintains a publicly searchable WHOIS database that provides information about IP address assignments.
The Internet Engineering Task Force (IETF) and IANA have restricted various reserved IP addresses from general use for special purposes. Notably, these addresses are used for multicast traffic and to provide addressing space for unrestricted uses on private networks.
Special address blocks
|Address block||Address range||Number of addresses||Scope||Description|
|0.0.0.0/8||0.0.0.0–0.255.255.255||16777216||Software||Current network (only valid as source address).|
|10.0.0.0/8||10.0.0.0–10.255.255.255||16777216||Private network||Used for local communications within a private network.|
|100.64.0.0/10||100.64.0.0–100.127.255.255||4194304||Private network||Shared address space for communications between a service provider and its subscribers when using a carrier-grade NAT.|
|127.0.0.0/8||127.0.0.0–127.255.255.255||16777216||Host||Used for loopback addresses to the local host.|
|169.254.0.0/16||169.254.0.0–169.254.255.255||65536||Subnet||Used for link-local addresses between two hosts on a single link when no IP address is otherwise specified, such as would have normally been retrieved from a DHCP server.|
|172.16.0.0/12||172.16.0.0–172.31.255.255||1048576||Private network||Used for local communications within a private network.|
|192.0.0.0/24||192.0.0.0–192.0.0.255||256||Private network||IETF Protocol Assignments.|
|192.0.2.0/24||192.0.2.0–192.0.2.255||256||Documentation||Assigned as TEST-NET-1, documentation and examples.|
|192.88.99.0/24||192.88.99.0–192.88.99.255||256||Internet||Reserved. Formerly used for IPv6 to IPv4 relay (included IPv6 address block 2002::/16).|
|192.168.0.0/16||192.168.0.0–192.168.255.255||65536||Private network||Used for local communications within a private network.|
|198.18.0.0/15||198.18.0.0–198.19.255.255||131072||Private network||Used for benchmark testing of inter-network communications between two separate subnets.|
|198.51.100.0/24||198.51.100.0–198.51.100.255||256||Documentation||Assigned as TEST-NET-2, documentation and examples.|
|203.0.113.0/24||203.0.113.0–203.0.113.255||256||Documentation||Assigned as TEST-NET-3, documentation and examples.|
|224.0.0.0/4||224.0.0.0–239.255.255.255||268435456||Internet||In use for IP multicast. (Former Class D network.)|
|240.0.0.0/4||240.0.0.0–255.255.255.254||268435455||Internet||Reserved for future use. (Former Class E network.)|
|255.255.255.255/32||255.255.255.255||1||Subnet||Reserved for the "limited broadcast" destination address.|
Of the approximately four billion addresses defined in IPv4, about 18 million addresses in three ranges are reserved for use in private networks. Packets with addresses in these ranges are not routable on the public Internet; they are ignored by all public routers. Therefore, private hosts cannot directly communicate with public networks and require network address translation at a routing gateway for this purpose.
Reserved private IPv4 network ranges
|Name||CIDR block||Address range||Number of addresses||Classful description|
|24-bit block||10.0.0.0/8||10.0.0.0–10.255.255.255||16777216||Single Class A.|
|20-bit block||172.16.0.0/12||172.16.0.0–172.31.255.255||1048576||Contiguous range of 16 Class B blocks.|
|16-bit block||192.168.0.0/16||192.168.0.0–192.168.255.255||65536||Contiguous range of 256 Class C blocks.|
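These reserved ranges are built into Python's standard ipaddress module, which makes membership checks one-liners; a minimal sketch:

```python
import ipaddress

for text in ["10.1.2.3", "172.31.0.1", "192.168.5.1",  # RFC 1918 private
             "169.254.10.20",                          # link-local
             "127.0.0.1",                              # loopback
             "8.8.8.8"]:                               # globally routable
    ip = ipaddress.ip_address(text)
    print(f"{text:>15}  private={ip.is_private}  "
          f"link_local={ip.is_link_local}  loopback={ip.is_loopback}")
```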
Since two private networks, e.g., two branch offices, cannot directly interoperate via the public Internet, the two networks must be bridged across the Internet via a virtual private network (VPN) or an IP tunnel, which encapsulates packets, including their headers containing the private addresses, in a protocol layer during transmission across the public network. Additionally, encapsulated packets may be encrypted for transmission across public networks to secure the data.
RFC 3927 defines the special address block 169.254.0.0/16 for link-local addressing. These addresses are only valid on the link (such as a local network segment or point-to-point connection) directly connected to a host that uses them. These addresses are not routable. Like private addresses, these addresses cannot be the source or destination of packets traversing the internet. These addresses are primarily used for address autoconfiguration (Zeroconf) when a host cannot obtain an IP address from a DHCP server or other internal configuration methods.
When the address block was reserved, no standards existed for address autoconfiguration. Microsoft created an implementation called Automatic Private IP Addressing (APIPA), which was deployed on millions of machines and became a de facto standard. Many years later, in May 2005, the IETF defined a formal standard in RFC 3927, entitled Dynamic Configuration of IPv4 Link-Local Addresses.
The class A network 127.0.0.0 (classless network 127.0.0.0/8) is reserved for loopback. IP packets whose source addresses belong to this network should never appear outside a host. Packets received on a non-loopback interface with a loopback source or destination address must be dropped.
First and last subnet addresses
The first address in a subnet is used to identify the subnet itself. In this address all host bits are 0. To avoid ambiguity in representation, this address is reserved. The last address has all host bits set to 1. It is used as a local broadcast address for sending messages to all devices on the subnet simultaneously. For networks of size /24 or larger, the broadcast address always ends in 255.
For example, in the subnet 192.168.5.0/24 (subnet mask 255.255.255.0) the identifier 192.168.5.0 is used to refer to the entire subnet. The broadcast address of the network is 192.168.5.255.
[Table omitted: the subnet shown in binary form and dot-decimal notation. The host part of the IP address (shown in red in the original) is inverted (logical NOT) to form the broadcast address; the network prefix remains intact.]
However, this does not mean that every address ending in 0 or 255 cannot be used as a host address. For example, in the /16 subnet 192.168.0.0/255.255.0.0, which is equivalent to the address range 192.168.0.0–192.168.255.255, the broadcast address is 192.168.255.255. One can use the following addresses for hosts, even though they end with 255: 192.168.1.255, 192.168.2.255, etc. Also, 192.168.0.0 is the network identifier and must not be assigned to an interface. The addresses 192.168.1.0, 192.168.2.0, etc., may be assigned, despite ending with 0.
In the past, conflict between network addresses and broadcast addresses arose because some software used non-standard broadcast addresses with zeros instead of ones.
In networks smaller than /24, broadcast addresses do not necessarily end with 255. For example, a CIDR subnet 203.0.113.16/28 has the broadcast address 203.0.113.31.
[Table omitted: the /28 subnet shown in binary form and dot-decimal notation, with the host part inverted to form the broadcast address and the network prefix intact.]
As a special case, a /31 network has capacity for just two hosts. These networks are typically used for point-to-point connections. There is no network identifier or broadcast address for these networks.
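Python's ipaddress module applies these rules directly, so the /24, /28, and /31 cases above can be checked; a minimal sketch:

```python
import ipaddress

for cidr in ["192.168.5.0/24", "203.0.113.16/28"]:
    net = ipaddress.ip_network(cidr)
    print(cidr, "-> network id:", net.network_address,
          " broadcast:", net.broadcast_address)
# 192.168.5.0/24 -> network id: 192.168.5.0  broadcast: 192.168.5.255
# 203.0.113.16/28 -> network id: 203.0.113.16  broadcast: 203.0.113.31

# A /31 point-to-point link reserves nothing: both addresses are usable
# hosts (RFC 3021). In Python 3.8+, hosts() reflects this:
p2p = ipaddress.ip_network("203.0.113.40/31")
print(list(p2p.hosts()))  # [IPv4Address('203.0.113.40'), IPv4Address('203.0.113.41')]
```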
Hosts on the Internet are usually known by names, e.g., www.example.com, not primarily by their IP address, which is used for routing and network interface identification. The use of domain names requires translating, called resolving, them to addresses and vice versa. This is analogous to looking up a phone number in a phone book using the recipient's name.
The translation between addresses and domain names is performed by the Domain Name System (DNS), a hierarchical, distributed naming system that allows for the subdelegation of namespaces to other DNS servers.
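In code, forward resolution is a single call to the system resolver. A minimal Python sketch (the printed addresses depend on your resolver and may change over time):

```python
import socket

# Forward resolution: host name -> IPv4 address.
print(socket.gethostbyname("www.example.com"))

# getaddrinfo is the more general interface (it also handles IPv6).
for family, _, _, _, sockaddr in socket.getaddrinfo(
        "www.example.com", 80, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr)
```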
Address space exhaustion
By the 1980s, it was apparent that the pool of available IPv4 addresses was depleting at a rate that was not initially anticipated in the original design of the network. The main market forces that accelerated address depletion included the rapidly growing number of Internet users, who increasingly used mobile computing devices, such as laptop computers, personal digital assistants (PDAs), and smart phones with IP data services. In addition, high-speed Internet access was based on always-on devices. The threat of exhaustion motivated the introduction of a number of remedial technologies, such as Classless Inter-Domain Routing (CIDR) methods by the mid-1990s, pervasive use of network address translation (NAT) in network access provider systems, and strict usage-based allocation policies at the regional and local Internet registries.
The primary address pool of the Internet, maintained by IANA, was exhausted on 3 February 2011, when the last five blocks were allocated to the five RIRs. APNIC was the first RIR to exhaust its regional pool on 15 April 2011, except for a small amount of address space reserved for the transition technologies to IPv6, which is to be allocated under a restricted policy.
The long-term solution to address exhaustion was the 1998 specification of a new version of the Internet Protocol, IPv6. It provides a vastly increased address space, but also allows improved route aggregation across the Internet, and offers large subnetwork allocations of a minimum of 2^64 host addresses to end users. However, IPv4 is not directly interoperable with IPv6, so that IPv4-only hosts cannot directly communicate with IPv6-only hosts. With the phase-out of the 6bone experimental network starting in 2004, permanent formal deployment of IPv6 commenced in 2006. Completion of IPv6 deployment is expected to take considerable time, so that intermediate transition technologies are necessary to permit hosts to participate in the Internet using both versions of the protocol.
An IP packet consists of a header section and a data section. An IP packet has no data checksum or any other footer after the data section. Typically the link layer encapsulates IP packets in frames with a CRC footer that detects most errors, and many transport-layer protocols carried by IP also have their own error checking.
The IPv4 packet header consists of 14 fields, of which 13 are required. The 14th field is optional and aptly named: options. The fields in the header are packed with the most significant byte first (big endian), and for the diagram and discussion, the most significant bits are considered to come first (MSB 0 bit numbering). The most significant bit is numbered 0, so the version field is actually found in the four most significant bits of the first byte, for example.
Header layout (byte offset, bit offset, fields):
|0||0||Version / IHL||DSCP / ECN||Total Length|
|4||32||Identification||Flags / Fragment Offset|
|8||64||Time To Live||Protocol||Header Checksum|
|12||96||Source IP Address|
|16||128||Destination IP Address|
|20||160||Options (if IHL > 5)|
- Version
- The first header field in an IP packet is the four-bit version field. For IPv4, this is always equal to 4.
- Internet Header Length (IHL)
The IPv4 header is variable in size due to the optional 14th field (options). The IHL field contains the size of the IPv4 header; its 4 bits specify the number of 32-bit words in the header. The minimum value for this field is 5, which indicates a length of 5 × 32 bits = 160 bits = 20 bytes. As a 4-bit field, the maximum value is 15; this means that the maximum size of the IPv4 header is 15 × 32 bits = 480 bits = 60 bytes. (A short parsing sketch follows this field list.)
- Differentiated Services Code Point (DSCP)
- Originally defined as the type of service (ToS), this field specifies differentiated services (DiffServ) per RFC 2474 (updated by RFC 3168 and RFC 3260). New technologies are emerging that require real-time data streaming and therefore make use of the DSCP field. An example is Voice over IP (VoIP), which is used for interactive voice services.
- Explicit Congestion Notification (ECN)
- This field is defined in RFC 3168 and allows end-to-end notification of network congestion without dropping packets. ECN is an optional feature that is only used when both endpoints support it and are willing to use it. It is effective only when supported by the underlying network.
- Total Length
- This 16-bit field defines the entire packet size in bytes, including header and data. The minimum size is 20 bytes (header without data) and the maximum is 65,535 bytes. All hosts are required to be able to reassemble datagrams of size up to 576 bytes, but most modern hosts handle much larger packets. Sometimes links impose further restrictions on the packet size, in which case datagrams must be fragmented. Fragmentation in IPv4 is handled in either the host or in routers.
- Identification
- This field is an identification field and is primarily used for uniquely identifying the group of fragments of a single IP datagram. Some experimental work has suggested using the ID field for other purposes, such as adding packet-tracing information to help trace datagrams with spoofed source addresses, but RFC 6864 now prohibits any such use.
- Flags
- A three-bit field follows and is used to control or identify fragments. They are (in order, from most significant to least significant):
- bit 0: Reserved; must be zero.[note 1]
- bit 1: Don't Fragment (DF)
- bit 2: More Fragments (MF)
- Fragment Offset
The fragment offset field is measured in units of eight-byte blocks. It is 13 bits long and specifies the offset of a particular fragment relative to the beginning of the original unfragmented IP datagram. The first fragment has an offset of zero. This allows a maximum offset of (2^13 − 1) × 8 = 65,528 bytes, which would exceed the maximum IP packet length of 65,535 bytes with the header length included (65,528 + 20 = 65,548 bytes).
- Time To Live (TTL)
- An eight-bit time to live field helps prevent datagrams from persisting (e.g. going in circles) on an internet. This field limits a datagram's lifetime. It is specified in seconds, but time intervals less than 1 second are rounded up to 1. In practice, the field has become a hop count—when the datagram arrives at a router, the router decrements the TTL field by one. When the TTL field hits zero, the router discards the packet and typically sends an ICMP Time Exceeded message to the sender. The program traceroute uses these ICMP Time Exceeded messages to print the routers used by packets to go from the source to the destination.
- Protocol
- This field defines the protocol used in the data portion of the IP datagram. IANA maintains a list of IP protocol numbers as directed by RFC 790.
- Header Checksum
- The 16-bit IPv4 header checksum field is used for error-checking of the header. When a packet arrives at a router, the router calculates the checksum of the header and compares it to the checksum field. If the values do not match, the router discards the packet. Errors in the data field must be handled by the encapsulated protocol. Both UDP and TCP have checksum fields.
- Source address
- This field is the IPv4 address of the sender of the packet. Note that this address may be changed in transit by a network address translation device.
- Destination address
- This field is the IPv4 address of the receiver of the packet. As with the source address, this may be changed in transit by a network address translation device.
The options field is not often used. Note that the value in the IHL field must include enough extra 32-bit words to hold all the options (plus any padding needed to ensure that the header contains an integer number of 32-bit words). The list of options may be terminated with an EOL (End of Options List, 0x00) option; this is only necessary if the end of the options would not otherwise coincide with the end of the header. The possible options that can be put in the header are as follows:
|Field||Size (bits)||Description|
|Copied||1||Set to 1 if the options need to be copied into all fragments of a fragmented packet.|
|Option Class||2||A general options category. 0 is for "control" options, and 2 is for "debugging and measurement". 1 and 3 are reserved.|
|Option Number||5||Specifies an option.|
|Option Length||8||Indicates the size of the entire option (including this field). This field may not exist for simple options.|
|Option Data||Variable||Option-specific data. This field may not exist for simple options.|
- Note: If the header length is greater than 5 (i.e., it is from 6 to 15) it means that the options field is present and must be considered.
- Note: Copied, Option Class, and Option Number are sometimes referred to as a single eight-bit field, the Option Type.
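To make the layout and the checksum rule concrete, here is a minimal Python sketch that unpacks a 20-byte header and verifies its checksum. The sample header bytes are illustrative values constructed for this example, not taken from a real capture:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 checksum: ones'-complement sum of 16-bit big-endian words."""
    if len(data) % 2:                       # pad odd-length input with a zero byte
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:                      # fold carries back into the low 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Illustrative header: IHL=5 (20 bytes), total length 40, TTL 64,
# protocol 6 (TCP), 10.0.0.1 -> 10.0.0.2, checksum 0xBB00 precomputed.
header = bytes.fromhex("45000028abcd00004006bb000a0000010a000002")

(ver_ihl, dscp_ecn, total_len, ident, flags_frag,
 ttl, proto, cksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", header)

print("version:", ver_ihl >> 4)                 # 4
print("header length:", (ver_ihl & 0x0F) * 4)   # IHL is in 32-bit words -> 20 bytes
print("ttl:", ttl, "protocol:", proto)          # 64 6
# A header whose checksum field is correct re-checksums to zero:
print("checksum valid:", internet_checksum(header) == 0)   # True
```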
The table below shows the defined options for IPv4. Strictly speaking, the column labeled "Option Number" is actually the "Option Value" that is derived from the Copied, Option Class, and Option Number bits as defined above. However, since most people today refer to this combined bit set as the "option number," this table shows that common usage. The table shows both the decimal and the hexadecimal option numbers.
|Option Number||Option Name||Description|
|0 / 0x00||EOOL||End of Option List|
|1 / 0x01||NOP||No Operation|
|2 / 0x02||SEC||Security (defunct)|
|7 / 0x07||RR||Record Route|
|10 / 0x0A||ZSU||Experimental Measurement|
|11 / 0x0B||MTUP||MTU Probe|
|12 / 0x0C||MTUR||MTU Reply|
|15 / 0x0F||ENCODE||ENCODE|
|25 / 0x19||QS||Quick-Start|
|30 / 0x1E||EXP||RFC3692-style Experiment|
|68 / 0x44||TS||Time Stamp|
|82 / 0x52||TR||Traceroute|
|94 / 0x5E||EXP||RFC3692-style Experiment|
|130 / 0x82||SEC||Security (RIPSO)|
|131 / 0x83||LSR||Loose Source Route|
|133 / 0x85||E-SEC||Extended Security (RIPSO)|
|134 / 0x86||CIPSO||Commercial IP Security Option|
|136 / 0x88||SID||Stream ID|
|137 / 0x89||SSR||Strict Source Route|
|142 / 0x8E||VISA||Experimental Access Control|
|144 / 0x90||IMITD||IMI Traffic Descriptor|
|145 / 0x91||EIP||Extended Internet Protocol|
|147 / 0x93||ADDEXT||Address Extension|
|148 / 0x94||RTRALT||Router Alert|
|149 / 0x95||SDB||Selective Directed Broadcast|
|151 / 0x97||DPS||Dynamic Packet State|
|152 / 0x98||UMP||Upstream Multicast Pkt.|
|158 / 0x9E||EXP||RFC3692-style Experiment|
|205 / 0xCD||FINN||Experimental Flow Control|
|222 / 0xDE||EXP||RFC3692-style Experiment|
The packet payload is not included in the checksum. Its contents are interpreted based on the value of the Protocol header field.
Some of the common payload protocols are:
|Protocol Number||Protocol Name||Abbreviation|
|1||Internet Control Message Protocol||ICMP|
|2||Internet Group Management Protocol||IGMP|
|6||Transmission Control Protocol||TCP|
|17||User Datagram Protocol||UDP|
|89||Open Shortest Path First||OSPF|
|132||Stream Control Transmission Protocol||SCTP|
See List of IP protocol numbers for a complete list.
Fragmentation and reassembly
The Internet Protocol enables traffic between networks. The design accommodates networks of diverse physical nature; it is independent of the underlying transmission technology used in the link layer. Networks with different hardware usually vary not only in transmission speed, but also in the maximum transmission unit (MTU). When one network wants to transmit datagrams to a network with a smaller MTU, it may fragment its datagrams. In IPv4, this function was placed at the Internet Layer, and is performed in IPv4 routers, which thus require no implementation of any higher layers for the function of routing IP packets.
In contrast, IPv6, the next generation of the Internet Protocol, does not allow routers to perform fragmentation; hosts must determine the path MTU before sending datagrams.
When a router receives a packet, it examines the destination address and determines the outgoing interface to use and that interface's MTU. If the packet size is larger than the MTU, and the Don't Fragment (DF) bit in the packet's header is set to 0, then the router may fragment the packet.
The router divides the packet into fragments. The maximum data size of each fragment is the MTU minus the IP header size (20 bytes minimum; 60 bytes maximum). The router puts each fragment into its own packet, with the following changes:
- The total length field is the fragment size.
- The more fragments (MF) flag is set for all fragments except the last one, which is set to 0.
- The fragment offset field is set, based on the offset of the fragment in the original data payload. This is measured in units of eight-byte blocks.
- The header checksum field is recomputed.
For example, for an MTU of 1,500 bytes and a header size of 20 bytes, the fragment offsets would be multiples of (1,500 − 20)/8 = 185. These multiples are 0, 185, 370, 555, 740, ...
It is possible that a packet is fragmented at one router, and that the fragments are further fragmented at another router. For example, a packet of 4,520 bytes, including the 20 bytes of the IP header (without options) is fragmented to two packets on a link with an MTU of 2,500 bytes:
The total data size is preserved: 2,480 bytes + 2,020 bytes = 4,500 bytes. The offsets are 0 and 2,480/8 = 310.
On a link with an MTU of 1,500 bytes, each fragment results in two fragments:
Again, the data size is preserved: 1480 + 1000 = 2480, and 1480 + 540 = 2020.
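The fragment arithmetic of this two-stage example can be reproduced in a few lines. The following sketch is an illustration only (the fragment helper is my own, not part of any IP stack); it tracks (offset in 8-byte blocks, data size, More Fragments) triples:

```python
HEADER = 20  # bytes: IPv4 header without options

def fragment(frags, mtu):
    """Split fragments so each piece's data fits in mtu - HEADER bytes.
    Non-final fragments must carry a multiple of 8 data bytes."""
    max_data = (mtu - HEADER) // 8 * 8
    out = []
    for offset, size, mf in frags:
        while size > max_data:
            out.append((offset, max_data, 1))   # MF=1: more pieces follow
            offset += max_data // 8
            size -= max_data
        out.append((offset, size, mf))          # tail keeps the original MF bit
    return out

packet = [(0, 4500, 0)]            # 4,520-byte packet = 20-byte header + 4,500 data
stage1 = fragment(packet, 2500)
stage2 = fragment(stage1, 1500)
print(stage1)  # [(0, 2480, 1), (310, 2020, 0)]
print(stage2)  # [(0, 1480, 1), (185, 1000, 1), (310, 1480, 1), (495, 540, 0)]
```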
In this case as well, the More Fragments bit remains set to 1 in all the fragments that arrived with it set, and it is cleared (set to 0) only in the last fragment, as usual. And of course, the Identification field continues to have the same value in all re-fragmented fragments. This way, even if fragments are re-fragmented, the receiver knows they all initially started from the same packet.
The last offset and last data size are used to calculate the total data size: 495 × 8 + 540 = 3,960 + 540 = 4,500 bytes.
A receiver knows that a packet is a fragment, if at least one of the following conditions is true:
- The flag "more fragments" is set, which is true for all fragments except the last.
- The field "fragment offset" is nonzero, which is true for all fragments except the first.
The receiver identifies matching fragments using the foreign and local address, the protocol ID, and the identification field. The receiver reassembles the data from fragments with the same ID using both the fragment offset and the more fragments flag. When the receiver receives the last fragment, which has the "more fragments" flag set to 0, it can calculate the size of the original data payload, by multiplying the last fragment's offset by eight and adding the last fragment's data size. In the given example, this calculation was 495 × 8 + 540 = 4,500 bytes.
When the receiver has all fragments, they can be reassembled in the correct sequence according to the offsets, to form the original datagram.
IP addresses are not tied in any permanent manner to hardware identifications and, indeed, a network interface can have multiple IP addresses in modern operating systems. Hosts and routers need additional mechanisms to identify the relationship between device interfaces and IP addresses, in order to properly deliver an IP packet to the destination host on a link. The Address Resolution Protocol (ARP) performs this IP-address-to-hardware-address translation for IPv4. (A hardware address is also called a MAC address.) In addition, the reverse correlation is often necessary. For example, when an IP host is booted or connected to a network it needs to determine its IP address, unless an address is preconfigured by an administrator. Protocols for such inverse correlations exist in the Internet Protocol Suite. Currently used methods are Dynamic Host Configuration Protocol (DHCP), Bootstrap Protocol (BOOTP) and, infrequently, reverse ARP.
- "BGP Analysis Reports". Retrieved 2013-01-09.
- "Understanding IP Addressing: Everything You Ever Wanted To Know" (PDF). 3Com. Archived from the original (PDF) on June 16, 2001.
- M. Cotton; L. Vegoda; R. Bonica; B. Haberman (April 2013). Special-Purpose IP Address Registries. Internet Engineering Task Force. doi:10.17487/RFC6890. BCP 153. RFC 6890. Updated by RFC 8190.
- Y. Rekhter; B. Moskowitz; D. Karrenberg; G. J. de Groot; E. Lear (February 1996). Address Allocation for Private Internets. Network Working Group. doi:10.17487/RFC1918. BCP 5. RFC 1918. Updated by RFC 6761.
- J. Weil; V. Kuarsingh; C. Donley; C. Liljenstolpe; M. Azinger (April 2012). IANA-Reserved IPv4 Prefix for Shared Address Space. Internet Engineering Task Force (IETF). doi:10.17487/RFC6598. ISSN 2070-1721. BCP 153. RFC 6598.
- S. Cheshire; B. Aboba; E. Guttman (May 2005). Dynamic Configuration of IPv4 Link-Local Addresses. Network Working Group. doi:10.17487/RFC3927. RFC 3927.
- J. Arkko; M. Cotton; L. Vegoda (January 2010). IPv4 Address Blocks Reserved for Documentation. Internet Engineering Task Force. doi:10.17487/RFC5737. ISSN 2070-1721. RFC 5737.
- O. Troan (May 2015). B. Carpenter (ed.). Deprecating the Anycast Prefix for 6to4 Relay Routers. Internet Engineering Task Force. doi:10.17487/RFC7526. BCP 196. RFC 7526.
- C. Huitema (June 2001). An Anycast Prefix for 6to4 Relay Routers. Network Working Group. doi:10.17487/RFC3068. RFC 3068. Obsoleted by RFC 7526.
- S. Bradner; J. McQuaid (March 1999). Benchmarking Methodology for Network Interconnect Devices. Network Working Group. doi:10.17487/RFC2544. RFC 2544. Updated by: RFC 6201 and RFC 6815.
- M. Cotton; L. Vegoda; D. Meyer (March 2010). IANA Guidelines for IPv4 Multicast Address Assignments. Internet Engineering Task Force. doi:10.17487/RFC5771. BCP 51. RFC 5771.
- J. Reynolds, ed. (January 2002). Assigned Numbers: RFC 1700 is Replaced by an On-line Database. Network Working Group. doi:10.17487/RFC3232. RFC 3232. Obsoletes RFC 1700.
- Jeffrey Mogul (October 1984). Broadcasting Internet Datagrams. Network Working Group. doi:10.17487/RFC0919. RFC 919.
- "RFC 923". IETF. June 1984. Retrieved 15 November 2019.
Special Addresses: In certain contexts, it is useful to have fixed addresses with functional significance rather than as identifiers of specific hosts. When such usage is called for, the address zero is to be interpreted as meaning "this", as in "this network".
- Robert Braden (October 1989). "Requirements for Internet Hosts – Communication Layers". IETF. p. 31. RFC 1122.
- Robert Braden (October 1989). "Requirements for Internet Hosts – Communication Layers". IETF. p. 66. RFC 1122.
- RFC 3021
- "World 'running out of Internet addresses'". Archived from the original on 2011-01-25. Retrieved 2011-01-23.
- Smith, Lucie; Lipner, Ian (3 February 2011). "Free Pool of IPv4 Address Space Depleted". Number Resource Organization. Retrieved 3 February 2011.
- ICANN, NANOG mailing list. "Five /8s allocated to RIRs – no unallocated IPv4 unicast /8s remain".
- Asia-Pacific Network Information Centre (15 April 2011). "APNIC IPv4 Address Pool Reaches Final /8". Archived from the original on 7 August 2011. Retrieved 15 April 2011.
- "Internet Protocol, Version 6 (IPv6) Specification". tools.ietf.org. Retrieved 2019-12-13.
- RFC 3701, R. Fink, R. Hinden, 6bone (IPv6 Testing Address Allocation) Phaseout (March 2004)
- 2016 IEEE International Conference on Emerging Technologies and Innovative Business Practices for the Transformation of Societies (EmergiTech), 3–6 Aug. 2016, University of Technology, Mauritius. Piscataway, NJ: Institute of Electrical and Electronics Engineers. ISBN 9781509007066. OCLC 972636788.
- RFC 1726 section 6.2
- Postel, J. "Internet Protocol". tools.ietf.org. Retrieved 2019-03-12.
- Savage, Stefan. "Practical network support for IP traceback". Retrieved 2010-09-06.
- "Cisco unofficial FAQ". Retrieved 2012-05-10.
- Internet Assigned Numbers Authority (IANA)
- IP, Internet Protocol — IP Header Breakdown, including specific options
- RFC 3344 — IPv4 Mobility
- IPv6 vs. carrier-grade NAT/squeezing more out of IPv4
- RIPE report on address consumption as of October 2003
- Official current state of IPv4 /8 allocations, as maintained by IANA
- Dynamically generated graphs of IPv4 address consumption with predictions of exhaustion dates—Geoff Huston
- IP addressing in China and the myth of address shortage
- Countdown of remaining IPv4 available addresses (estimated)
|
This term I am introducing geometrical reasoning at Key Stage 3 to Year 7. I have been thinking about where to start the topic, what the key objectives should be, and how I can challenge the various abilities.
We start the topic learning how to measure and draw acute and obtuse angles with a 180° protractor. Students have lots of practice to learn how to position the protractor correctly.
Understanding the types of angles is also key. If they can identify whether an angle is acute or obtuse, they are less likely to use the wrong scale when measuring, so 40° is not measured as 140°.
To challenge these students I would teach them how to draw and measure reflex angles using a 180° protractor, by separating the angle into a straight line and an acute or obtuse angle. Some students are able instead to subtract the interior angle from 360°; for example, a reflex angle whose interior angle measures 140° is 360° − 140° = 220°.
Calculating missing angles on a straight line and about a point is also a key skill.
Middle-ability students begin by applying angle properties, such as angles on a straight line, vertically opposite angles and angles in a triangle. Little time, if any, is given to measuring or drawing angles, although I do make sure their knowledge of this is secure.
To challenge the middle ability students I like to introduce proof. A nice way of doing this could be to prove vertically opposite angles are equal and move on to prove angles in a triangle using parallel lines.
More able students start the first lesson by recapping angles on a straight line. Moving on to the proof of vertically opposite angles being equal and why angles about a point have a sum of 360°. We then move on to proving angles in parallel lines and angles in a triangle. Every angle question would involve at least two angle properties and emphasise the need to explain which angle properties apply.
Knowing where the angle properties originate from and being able to prove them is key for this ability. Introducing proof at this point in their mathematics education ensures they are much more likely to prove formulae like the Cosine Rule or Quadratic formula at GCSE.
For further examples of proof with geometry check out my YouTube videos.
|
In addition, the radiation process causes the lunar soil, or regolith, to darken over time, which is important in understanding the geologic history of the moon.
The scientists present their findings in a paper published online in the American Geophysical Union's Journal of Geophysical Research (JGR)-Planets. The paper, titled "Lunar Radiation Environment and Space Weathering from the Cosmic Ray Telescope for the Effects of Radiation (CRaTER)," is based on measurements made by the CRaTER instrument onboard NASA's Lunar Reconnaissance Orbiter (LRO) mission. The paper's lead author is Nathan Schwadron, an associate professor of physics at the UNH Space Science Center within the Institute for the Study of Earth, Oceans, and Space (EOS). Co-author Harlan Spence is the director of EOS and lead scientist for the CRaTER instrument.
The telescope provides the fundamental measurements needed to test our understanding of the lunar radiation environment and shows that "space weathering" of the lunar surface by energetic radiation is an important agent for chemical alteration. CRaTER measures material interactions of GCRs and solar energetic particles (SEPs), both of which present formidable hazards for human exploration and spacecraft operations. CRaTER characterizes the global lunar radiation environment and its biological impacts by measuring radiation behind a "human tissue-equivalent" plastic.
Serendipitously, the LRO mission made measurements during a period when GCR fluxes remained at the highest levels ever observed in the space age due to the sun's abnormally extended quiet cycle. During this quiescent period, the diminished power, pressure, flux and magnetic flux of the solar wind allowed GCRs and SEPs to more readily interact with objects they encountered – particularly bodies such as our moon, which has no atmosphere to shield the blow.
"This has provided us with a unique opportunity because we've never made these types of measurements before over an extended period of time, which means we've never been able to validate our models," notes Schwadron. "Now we can put this whole modeling field on more solid footing and project GCR dose rates from the present period back through time when different interplanetary conditions prevailed." This projection will provide a clearer picture of the effects of GCRs on airless bodies through the history of the solar system.
Moreover, CRaTER's recent findings also provide further insight into radiation as a double-edged sword. That is, while cosmic radiation does pose risks to astronauts and even spacecraft, it may have been a fundamental agent of change on celestial bodies by irradiating water ice and causing chemical alterations. Specifically, the process releases oxygen atoms from water ice, which are then free to bind with carbon to form large "prebiotic" organic molecules.
In addition to being able to accurately gauge the radiation environment of the past, the now more robust models can also be used more effectively to predict potential radiation hazards spawned by GCRs and SEPs.
Says Schwadron, "Our validated models will be able to answer the question of how hazardous the space environment is and could be during these high-energy radiation events, and the ability to do this is absolutely necessary for any manned space exploration beyond low-Earth orbit."
Indeed, current models were in agreement with radiation dose rates measured by CRaTER, which demonstrates the accuracy of the Earth-Moon-Mars Radiation Environment Module (EMMREM) being developed at UNH. EMMREM integrates a variety of models describing radiation effects in the Earth-moon-Mars and interplanetary space environments and has now been validated to show its suitability for real-time space weather prediction.
Additional co-authors on the UNH CRaTER team include Thomas Baker, Michael Golightly, Andrew Jordan, Colin Joyce, Sonya Smith, and Jody Wilson. Other co-authors are from the Aerospace Corporation, the Harvard-Smithsonian Center for Astrophysics, NASA Goddard Space Flight Center, Boston University, NASA Headquarters, Scientific Data Processing, the University of Tennessee, and the Southwest Research Institute.
The University of New Hampshire, founded in 1866, is a world-class public research university with the feel of a New England liberal arts college. A land, sea, and space-grant university, UNH is the state's flagship public institution, enrolling 12,200 undergraduate and 2,300 graduate students.
Photograph to download: http://crater.unh.edu/graphics/gallery/LRO-7-1_lg.jpg
Artist's illustration of the Lunar Reconnaissance Orbiter. CRaTER is the instrument center-mounted at the bottom of LRO. Illustration by Chris Meaney/NASA.
For more information on the Cosmic Ray Telescope for the Effects of Radiation (CRaTER), visit http://crater.unh.edu.
For more information on EMMREM, visit http://emmrem.unh.edu
|
Net neutrality (also network neutrality, Internet neutrality, or net equality) is the principle that Internet service providers and governments should treat all data on the Internet equally, not discriminating or charging differentially by user, content, site, platform, application, type of attached equipment, or mode of communication. The term was coined by Columbia University media law professor Tim Wu in 2003 as an extension of the longstanding concept of a common carrier.
Examples of net neutrality violations include when the Internet service provider Comcast intentionally slowed peer-to-peer communications. In 2007, another company used deep packet inspection to discriminate against peer-to-peer, file transfer protocol, and online game traffic, instituting a cell-phone-style billing system of overages, free-to-telecom value-added services, and bundling.
Network neutrality is the principle that all Internet traffic should be treated equally. According to Columbia Law School professor Tim Wu, the best way to explain network neutrality is as a principle to be used when designing a network: that a public information network will end up being most useful if all content, sites, and platforms are treated equally. A more detailed proposed definition of technical and service network neutrality suggests that service network neutrality is the adherence to the paradigm that operation of a service at a certain layer is not influenced by any data other than the data interpreted at that layer, and in accordance with the protocol specification for that layer.
The idea of an open Internet is the idea that the full resources of the Internet and means to operate on it are easily accessible to all individuals and companies. This often includes ideas such as net neutrality, open standards, transparency, lack of Internet censorship, and low barriers to entry. The concept of the open Internet is sometimes expressed as an expectation of decentralized technological power, and is seen by some as closely related to open-source software.
Proponents often see net neutrality as an important component of an open Internet, where policies such as equal treatment of data and open web standards allow those on the Internet to easily communicate and conduct business without interference from a third party. A closed Internet refers to the opposite situation, in which established persons, corporations or governments favor certain uses. A closed Internet may have restricted access to necessary web standards, artificially degrade some services, or explicitly filter out content.
The concept of a dumb network made up of dumb pipes has been around since at least the early 1990s. The idea of a dumb network is that the endpoints of a network are generally where the intelligence lies, and that the network itself generally leaves the management and operation of communication to the end users. In 2013 the software company MetroTech Net, Inc. (MTN) coined the term Dumb Wave which is the modern application of the Dumb Pipe concept to the ubiquitous wireless network. If wireless carriers do not provide unique and value-added services, they will be relegated to the dumb pipe category where they can't charge a premium or retain customers.
The end-to-end principle is a principle of network design, first laid out explicitly in the 1981 conference paper End-to-end arguments in system design by Jerome H. Saltzer, David P. Reed, and David D. Clark. The principle states that, whenever possible, communications protocol operations should be defined to occur at the end-points of a communications system, or as close as possible to the resource being controlled. According to the end-to-end principle, protocol features are only justified in the lower layers of a system if they are a performance optimization; hence, TCP retransmission for reliability is still justified, but efforts to improve TCP reliability should stop after peak performance has been reached. They argued that reliable systems tend to require end-to-end processing to operate correctly, in addition to any processing in the intermediate system. They pointed out that most features in the lowest level of a communications system have costs for all higher-layer clients, even if those clients do not need the features, and are redundant if the clients have to re-implement the features on an end-to-end basis. This leads to the model of a minimal dumb network with smart terminals, a completely different model from the previous paradigm of the smart network with dumb terminals. Because the end-to-end principle is one of the central design principles of the Internet, and because the practical means for implementing data discrimination violate the end-to-end principle, the principle often enters discussions about net neutrality. The end-to-end principle is closely related, and sometimes seen as a direct precursor to the principle of net neutrality.
Traffic shaping is the control of computer network traffic in order to optimize or guarantee performance, reduce latency, and/or increase usable bandwidth by delaying packets that meet certain criteria. More specifically, traffic shaping is any action on a set of packets (often called a stream or a flow) which imposes additional delay on those packets such that they conform to some predetermined constraint (a contract or traffic profile). Traffic shaping provides a means to control the volume of traffic being sent into a network in a specified period (bandwidth throttling), or the maximum rate at which the traffic is sent (rate limiting), or more complex criteria such as the generic cell rate algorithm (GCRA).
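One common shaping mechanism is the token bucket. The sketch below is a minimal single-threaded Python illustration of the idea (not any particular vendor's implementation): tokens accrue at the contracted rate, and a packet conforms only if enough tokens are available, which caps the average rate while permitting bounded bursts.

```python
import time

class TokenBucket:
    """Minimal token-bucket shaper: `rate` bytes/second accumulate,
    up to a burst allowance of `capacity` bytes."""
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes: int) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, never beyond capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True    # conforming: send immediately
        return False       # non-conforming: delay (shape) or drop (police)

bucket = TokenBucket(rate=125_000, capacity=10_000)  # ~1 Mbit/s, 10 kB burst
for i in range(8):
    print(i, bucket.allow(1500))   # the burst allowance admits the first six
```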
If the core of a network has more bandwidth than is permitted to enter at the edges, then good QoS can be obtained without policing. For example, the telephone network employs admission control to limit user demand on the network core by refusing to create a circuit for the requested connection. Over-provisioning is a form of statistical multiplexing that makes liberal estimates of peak user demand. Over-provisioning is used in private networks such as WebEx and the Internet2 Abilene Network, an American university network. David Isenberg believes that continued over-provisioning will always provide more capacity for less expense than QoS and deep packet inspection technologies.
Discrimination by protocol
Favoring or blocking information based on the communications protocol that the computers are using to communicate.
On 1 August 2008, the FCC formally voted 3-to-2 to uphold a complaint against Comcast, the largest cable company in the United States, ruling that it had illegally inhibited users of its high-speed Internet service from using file-sharing software. FCC chairman Kevin J. Martin said that the order was meant to set a precedent that Internet providers, and indeed all communications companies, could not prevent customers from using their networks the way they see fit unless there is a good reason. In an interview, Martin said, "We are preserving the open character of the Internet". The legal complaint against Comcast related to BitTorrent, a transfer protocol that is especially suited to distributing large files such as video, music, and software on the Internet. Comcast admitted no wrongdoing in its proposed settlement of up to US$16 per share in December 2009. However, a U.S. appeals court ruled in April 2010 that the FCC exceeded its authority when it sanctioned Comcast in 2008 for deliberately preventing some subscribers from using peer-to-peer file-sharing services to download large files. FCC spokeswoman Jen Howard responded, "the court in no way disagreed with the importance of preserving a free and open Internet, nor did it close the door to other methods for achieving this important end." In spite of the ruling in favor of Comcast, a study by Measurement Lab in October 2011 verified that Comcast had virtually stopped its BitTorrent throttling practices.
Discrimination by IP address
During the early decades of the Internet, creating a non-neutral Internet was technically infeasible. The Internet security company NetScreen Technologies released network firewalls in 2003 with so-called deep packet inspection, a technology originally developed to filter malware. Deep inspection helped make real-time discrimination between different kinds of data possible, and is often used for Internet censorship.
In a practice called zero-rating, companies reimburse data use from certain addresses, favoring use of those services. Examples include Facebook Zero and Google Free Zone; such programs are especially common in the developing world.
Sometimes ISPs will charge some companies, but not others, for the traffic they cause on the ISP's network. French telecoms operator Orange, complaining that traffic from YouTube and other Google sites makes up roughly 50% of total traffic on the Orange network, reached a deal with Google, in which they charge Google for the traffic incurred on the Orange network. Some also thought that Orange's rival ISP Free throttled YouTube traffic. However, an investigation by the French telecommunications regulatory body revealed that the network was simply congested during peak hours.
Favoring private networks
There is some disagreement about whether peering is a net neutrality issue.
In the first quarter of 2014, streaming website Netflix reached an arrangement with ISP Comcast to improve the quality of its service to Netflix clients. This arrangement was made in response to increasingly slow connection speeds through Comcast over the course of 2013, during which average speeds dropped by over 25% to an all-time low. After the deal was struck in January 2014, the Netflix speed index recorded a 66% increase in connection speed.
Netflix agreed to a similar deal with Verizon in 2014 after Verizon DSL customers' connection speeds dropped to less than 1 Mbit/s early in the year. Netflix spoke out against this deal with a controversial message delivered via the Netflix client to Verizon customers experiencing low connection speeds. This sparked a dispute between the two companies that led to Verizon obtaining a cease-and-desist order on 5 June 2014 that forced Netflix to stop displaying the message.
Legal enforcement of net neutrality principles takes a variety of forms, from provisions that outlaw anti-competitive blocking and throttling of Internet services, all the way to legal enforcement that prevents companies from subsidizing Internet use on particular sites.
Debate in the United States
There has been extensive debate about whether net neutrality should be required by law in the United States; the debate predates the coining of the term itself. Advocates of net neutrality such as Lawrence Lessig have raised concerns about the ability of broadband providers to use their last-mile infrastructure to block Internet applications and content (e.g. websites, services, and protocols), and even to block out competitors. Opponents counter that net neutrality regulations would deter investment in improving broadband infrastructure and attempt to fix something that is not broken.
Net neutrality proponents claim that telecom companies seek to impose a tiered service model in order to control the pipeline and thereby remove competition, create artificial scarcity, and oblige subscribers to buy their otherwise noncompetitive services. Many believe net neutrality to be primarily important for the preservation of current internet freedoms; a lack of net neutrality would allow Internet service providers, such as Comcast, to extract payment from content providers like Netflix, and these charges would ultimately be passed on to consumers. Prominent supporters of net neutrality include Vinton Cerf, co-inventor of the Internet Protocol, Tim Berners-Lee, creator of the Web, law professor Tim Wu, Netflix CEO Reed Hastings, Tumblr founder David Karp, and Last Week Tonight host John Oliver. Organizations and companies that support net neutrality include the American Civil Liberties Union, the Electronic Frontier Foundation, Greenpeace, Tumblr, Kickstarter, Vimeo, Wikia, Mozilla Foundation, and others.
Net neutrality opponents such as IBM, Intel, Juniper, Qualcomm, and Cisco claim that net neutrality would deter investment in broadband infrastructure, saying that "shifting to Title II means that instead of billions of broadband investment driving other sectors of the economy forward, any reduction in this spending will stifle growth across the entire economy. Title II is going to lead to a slowdown, if not a hold, in broadband build out, because if you don’t know that you can recover on your investment, you won’t make it." Others argue that the regulation is "a solution that won’t work to a problem that simply doesn’t exist". Prominent opponents also include Netscape founder and venture capitalist Marc Andreessen, co-inventor of the Internet Protocol Bob Kahn, PayPal founder and Facebook investor Peter Thiel, MIT Media Lab founder Nicholas Negroponte, Internet engineer and former FCC Chief Technologist David Farber, VoIP pioneer Jeff Pulver, and Nobel laureate economist Gary Becker. Organizations and companies that oppose net neutrality regulations include several major technology hardware companies, cable and telecommunications companies, hundreds of small Internet service providers, various think tanks, several civil rights groups, and others.
Critics of net neutrality argue that data discrimination is desirable for reasons such as guaranteeing quality of service. Bob Kahn, co-inventor of the Internet Protocol, has called the term net neutrality a slogan and opposes establishing it, though he opposes fragmentation of the net whenever it excludes other participants. Vint Cerf, Kahn's co-inventor of the Internet Protocol, explains the confusion over their positions on net neutrality: "There’s also some argument that says, well, you have to treat every packet the same. That’s not what any of us said. Or you can’t charge more for more usage. We didn’t say that either."
On 31 January 2015, AP News reported that the FCC would present the notion of applying ("with some caveats") Title II (common carrier) of the Communications Act of 1934 to the Internet in a vote expected on 26 February 2015. Adoption of this notion would reclassify Internet service from an information service to a telecommunications service and, according to Tom Wheeler, chairman of the FCC, ensure net neutrality. The FCC said that it would not let the public see its 332-page net neutrality plan until after the vote on its implementation, in keeping with long-established FCC policy. The FCC was expected to enforce net neutrality in its vote, according to the New York Times.
On 26 February 2015, the United States FCC ruled in favor of net neutrality by reclassifying broadband access as a telecommunications service and thus applying Title II (common carrier) of the Communications Act of 1934 to Internet service providers. The FCC Chairman, Tom Wheeler, commented, "This is no more a plan to regulate the Internet than the First Amendment is a plan to regulate free speech. They both stand for the same concept."
Arguments for net neutrality
Proponents of net neutrality include consumer advocates, human rights organizations such as Article 19, online companies, and some technology companies. Many major Internet application companies are advocates of neutrality. Yahoo!, Vonage, eBay, Amazon, IAC/InterActiveCorp, Microsoft, Twitter, Tumblr, Etsy, Daily Kos, and Greenpeace, along with many other companies and organizations, have also taken a stance in support of net neutrality. Cogent Communications, an international Internet service provider, has announced support for certain net neutrality policies. In 2008, Google published a statement speaking out against letting broadband providers abuse their market power to affect access to competing applications or content. It equated the situation to that of the telephony market, where telephone companies are not allowed to control whom their customers call or what those customers are allowed to say. However, Google's support of net neutrality has more recently been called into question. Several civil rights groups, such as the ACLU, the Electronic Frontier Foundation, Free Press, and Fight for the Future, support net neutrality.
Individuals who support net neutrality include Tim Berners-Lee, Vinton Cerf, Lawrence Lessig, Robert W. McChesney, Steve Wozniak, Susan P. Crawford, Ben Scott, David Reed, and U.S. President Barack Obama. On 10 November 2014, President Obama recommended the FCC reclassify broadband Internet service as a telecommunications service in order to preserve net neutrality. On 12 November 2014, AT&T stopped build-out of their fiber network until it has "solid net neutrality rules to follow".
Control of data
Supporters of network neutrality want to designate cable companies as common carriers, which would require them to allow Internet service providers (ISPs) free access to cable lines, the model used for dial-up Internet. They want to ensure that cable companies cannot screen, interrupt or filter Internet content without court order. Common carrier status would give the FCC the power to enforce net neutrality rules.
SaveTheInternet.com accuses cable and telecommunications companies of wanting the role of gatekeepers, able to control which websites load quickly, load slowly, or don't load at all. According to SaveTheInternet.com, these companies want to charge content providers who require guaranteed speedy data delivery...to create advantages for their own search engines, Internet phone services, and streaming video services, while slowing or blocking access to those of competitors. Vinton Cerf, a co-inventor of the Internet Protocol and a vice president of Google, argues that the Internet was designed without any authorities controlling access to new content or new services. He concludes that the principles responsible for making the Internet such a success would be fundamentally undermined were broadband carriers given the ability to affect what people see and do online.
Digital rights and freedoms
Lawrence Lessig and Robert W. McChesney argue that net neutrality ensures that the Internet remains a free and open technology, fostering democratic communication. Lessig and McChesney go on to argue that the monopolization of the Internet would stifle the diversity of independent news sources and the generation of innovative and novel web content.
User intolerance for slow-loading sites
Proponents of net neutrality invoke the human psychological process of adaptation: when people get used to something better, they do not want to go back to something worse. In the context of the Internet, the proponents argue that a user who gets used to the "fast lane" on the Internet would find the "slow lane" intolerable in comparison, greatly disadvantaging any provider unable to pay for the "fast lane". Video providers Netflix and Vimeo, in their comments to the FCC in favor of net neutrality, cite the research of S.S. Krishnan and Ramesh Sitaraman, which provides the first quantitative evidence of adaptation to speed among online video users. Their research studied the patience level of millions of Internet video users who waited for a slow-loading video to start playing. Users with faster Internet connectivity, such as fiber-to-the-home, demonstrated less patience and abandoned their videos sooner than similar users with slower Internet connectivity. The results demonstrate how users become accustomed to faster Internet connectivity, leading to higher expectations of Internet speed and lower tolerance for any delay. Author Nicholas Carr and other social commentators have written about this habituation phenomenon, stating that a faster flow of information on the Internet can make people less patient.
Competition and innovation
Net neutrality advocates argue that allowing cable companies the right to demand a toll to guarantee quality or premium delivery would create an exploitative business model based on the ISPs' position as gatekeepers. Advocates warn that by charging websites for access, network owners may be able to block competitor websites and services, as well as refuse access to those unable to pay. According to Tim Wu, cable companies plan to reserve bandwidth for their own television services and charge companies a toll for priority service.
Proponents of net neutrality argue that allowing for preferential treatment of Internet traffic, or tiered service, would put newer online companies at a disadvantage and slow innovation in online services. Tim Wu argues that, without network neutrality, the Internet will undergo a transformation from a market ruled by innovation to one ruled by deal-making. SaveTheInternet.com argues that net neutrality puts everyone on equal terms, which helps drive innovation. They claim it is a preservation of the way the internet has always operated, where the quality of websites and services determined whether they succeeded or failed, rather than deals with ISPs. Lawrence Lessig and Robert W. McChesney argue that eliminating net neutrality would lead to the Internet resembling the world of cable TV, so that access to and distribution of content would be managed by a handful of massive companies. These companies would then control what is seen as well as how much it costs to see it. Speedy and secure Internet use for such industries as health care, finance, retailing, and gambling could be subject to large fees charged by these companies. They further explain that a majority of the great innovators in the history of the Internet started with little capital in their garages, inspired by great ideas. This was possible because the protections of net neutrality ensured limited control by owners of the networks, maximal competition in this space, and permitted innovators from outside access to the network. Internet content was guaranteed a free and highly competitive space by the existence of net neutrality.
Preserving Internet standards
Network neutrality advocates have sponsored legislation claiming that authorizing incumbent network providers to override transport and application layer separation on the Internet would signal the decline of fundamental Internet standards and international consensus authority. Further, the legislation asserts that bit-shaping the transport of application data will undermine the transport layer's designed flexibility.
Alok Bhardwaj, founder of Epic Privacy Browser, argues that any violations to network neutrality, realistically speaking, will not involve genuine investment but rather payoffs for unnecessary and dubious services. He believes that it is unlikely that new investment will be made to lay special networks for particular websites to reach end-users faster. Rather, he believes that non-net neutrality will involve leveraging quality of service to extract remuneration from websites that want to avoid being slowed down.
Some advocates say network neutrality is needed in order to maintain the end-to-end principle. According to Lawrence Lessig and Robert W. McChesney, all content must be treated the same and must move at the same speed in order for net neutrality to be true. They say that it is this simple but brilliant end-to-end aspect that has allowed the Internet to act as a powerful force for economic and social good. Under this principle, a neutral network is a dumb network, merely passing packets regardless of the applications they support. This point of view was expressed by David S. Isenberg in his paper, "The Rise of the Stupid Network". He states that the vision of an intelligent network is being replaced by a new network philosophy and architecture in which the network is designed for always-on use, not intermittence and scarcity. Rather than intelligence being designed into the network itself, the intelligence would be pushed out to the end-user's device; and the network would be designed simply to deliver bits without fancy network routing or smart number translation. The data would be in control, telling the network where it should be sent. End-user devices would then be allowed to behave flexibly, as bits would essentially be free and there would be no assumption that the data is of a single data rate or data type.
Contrary to this idea, the research paper titled "End-to-end arguments in system design" by Saltzer, Reed, and Clark argues that network intelligence does not relieve end systems of the requirement to check inbound data for errors and to rate-limit the sender, and does not call for a wholesale removal of intelligence from the network core.
Arguments against net neutrality
Opponents of net neutrality regulations include AT&T, Verizon, IBM, Intel, Cisco, Nokia, Qualcomm, Broadcom, Juniper, D-Link, Wintel, Alcatel-Lucent, Corning, Panasonic, Ericsson, and others. Notable technologists who oppose net neutrality include Marc Andreessen, Scott McNealy, Peter Thiel, David Farber, Nicholas Negroponte, Rajeev Suri, Jeff Pulver, John Perry Barlow, and Bob Kahn.
Nobel laureate economist Gary Becker's paper "Net Neutrality and Consumer Welfare", published in the Journal of Competition Law & Economics, argues that claims by net neutrality proponents "do not provide a compelling rationale for regulation" because there is "significant and growing competition" among broadband access providers.
Google Chairman Eric Schmidt states that, while Google holds that similar data types should not be discriminated against, it is acceptable to discriminate across different data types, a position that both Google and Verizon generally agree on, according to Schmidt. According to The Wall Street Journal, when President Barack Obama announced his support for strong net neutrality rules late in 2014, Schmidt told a top White House official that the president was making a mistake.
Several civil rights groups, such as the National Urban League, Jesse Jackson's Rainbow/PUSH, and the League of United Latin American Citizens, also oppose Title II net neutrality regulations, saying that the call to regulate broadband Internet service as a utility would harm minority communities by stifling investment in underserved areas.
A number of other opponents created Hands Off The Internet, a website launched in 2006 to promote arguments against Internet regulation. Principal financial support for the website came from AT&T; members included BellSouth, Alcatel, Cingular, and Citizens Against Government Waste.
Robert Pepper, senior managing director for global advanced technology policy at Cisco Systems and former FCC chief of policy development, says: "The supporters of net neutrality regulation believe that more rules are necessary. In their view, without greater regulation, service providers might parcel out bandwidth or services, creating a bifurcated world in which the wealthy enjoy first-class Internet access, while everyone else is left with slow connections and degraded content. That scenario, however, is a false paradigm. Such an all-or-nothing world doesn't exist today, nor will it exist in the future. Without additional regulation, service providers are likely to continue doing what they are doing. They will continue to offer a variety of broadband service plans at a variety of price points to suit every type of consumer". Computer scientist Bob Kahn has said net neutrality is a slogan that would freeze innovation in the core of the Internet.
Farber has written and spoken strongly in favor of continued research and development on core Internet protocols. He joined academic colleagues Michael Katz, Christopher Yoo, and Gerald Faulhaber in an op-ed for the Washington Post strongly critical of network neutrality, essentially stating that while the Internet is in need of remodeling, congressional action aimed at protecting the best parts of the current Internet could interfere with efforts to build a replacement.
Reduction in innovation and investments
According to a letter to key Congressional and FCC leaders sent by 60 major ISP technology suppliers including IBM, Intel, Qualcomm, and Cisco, Title II regulation of the internet "means that instead of billions of broadband investment driving other sectors of the economy forward, any reduction in this spending will stifle growth across the entire economy. This is not idle speculation or fear mongering...Title II is going to lead to a slowdown, if not a hold, in broadband build out, because if you don’t know that you can recover on your investment, you won’t make it."
According to the Wall Street Journal, in one of Google’s few lobbying sessions with FCC officials, the company urged the agency to craft rules that encourage investment in broadband Internet networks—a position that mirrors the argument made by opponents of strong net neutrality rules, such as AT&T and Comcast.
Opponents of net neutrality argue that prioritization of bandwidth is necessary for future innovation on the Internet. Telecommunications providers such as telephone and cable companies, and some technology companies that supply networking gear, argue telecom providers should have the ability to provide preferential treatment in the form of tiered services, for example by giving online companies willing to pay the ability to transfer their data packets faster than other Internet traffic. The added revenue from such services could be used to pay for the building of increased broadband access to more consumers.
Opponents say that net neutrality would make it more difficult for Internet service providers (ISPs) and other network operators to recoup their investments in broadband networks. John Thorne, senior vice president and deputy general counsel of Verizon, a broadband and telecommunications company, has argued that they will have no incentive to make large investments to develop advanced fibre-optic networks if they are prohibited from charging higher preferred access fees to companies that wish to take advantage of the expanded capabilities of such networks. Thorne and other ISPs have accused Google and Skype of freeloading or free riding for using a network of lines and cables the phone company spent billions of dollars to build.
Marc Andreessen states that "a pure net neutrality view is difficult to sustain if you also want to have continued investment in broadband networks. If you’re a large telco right now, you spend on the order of $20 billion a year on capex. You need to know how you’re going to get a return on that investment. If you have these pure net neutrality rules where you can never charge a company like Netflix anything, you’re not ever going to get a return on continued network investment — which means you’ll stop investing in the network. And I would not want to be sitting here 10 or 20 years from now with the same broadband speeds we’re getting today."
Counterweight to server-side non-neutrality
Those in favor of forms of non-neutral tiered Internet access argue that the Internet is already not a level playing field: large companies achieve a performance advantage over smaller competitors by replicating servers and buying high-bandwidth services. Should prices drop for lower levels of access, or access to only certain protocols, for instance, a change of this type would make Internet usage more neutral, with respect to the needs of those individuals and corporations specifically seeking differentiated tiers of service. Network expert Richard Bennett has written, "A richly funded Web site, which delivers data faster than its competitors to the front porches of the Internet service providers, wants it delivered the rest of the way on an equal basis. This system, which Google calls broadband neutrality, actually preserves a more fundamental inequality."
Proponents of net neutrality regulations say network operators have continued to under-invest in infrastructure. However, according to Copenhagen Economics, US investment in telecom infrastructure is 50 percent higher than that of the European Union. As a share of GDP, US broadband investment trails only the UK and South Korea, and only slightly, while sizably exceeding that of Japan, Canada, Italy, Germany, and France.
On broadband speed, Akamai reported that the US trails only South Korea and Japan among its major trading partners, and trails only Japan in the G-7, in both average peak connection speed and the percentage of the population connecting at 10 Mbit/s or higher, while remaining substantially ahead of most of its other major trading partners.
The White House reported in June 2013 that U.S. connection speeds are "the fastest compared to other countries with either a similar population or land mass."
Akamai's report on "The State of the Internet" for the second quarter of 2014 says "a total of 39 states saw 4K readiness rate more than double over the past year." In other words, as ZDNet reports, those states saw a "major" increase in the availability of the 15 Mbit/s speeds needed for 4K video.
FCC Commissioner Ajit Pai and Federal Election Commission's Lee Goldman wrote in a Politico piece in February 2015, "Compare Europe, which has long had utility-style regulations, with the United States, which has embraced a light-touch regulatory model. Broadband speeds in the United States, both wired and wireless, are significantly faster than those in Europe. Broadband investment in the United States is several multiples that of Europe. And broadband’s reach is much wider in the United States, despite its much lower population density."
Significant and growing competition
A 2010 paper on net neutrality by Nobel Prize economist Gary Becker and his colleagues stated that "there is significant and growing competition among broadband access providers and that few significant competitive problems have been observed to date, suggesting that there is no compelling competitive rationale for such regulation."
Becker and fellow economists Dennis Carlton and Hal Sider found that "Between mid-2002 and mid-2008, the number of high-speed broadband access lines in the United States grew from 16 million to nearly 133 million, and the number of residential broadband lines grew from 14 million to nearly 80 million. Internet traffic roughly tripled between 2007 and 2009. At the same time, prices for broadband Internet access services have fallen sharply."
The Progressive Policy Institute (PPI) reports that the profit margins of U.S. broadband providers are generally one-sixth to one-eighth those of companies that use broadband (such as Apple or Google), contradicting the idea of monopolistic price-gouging by providers.
A June 2014 report by the Progressive Policy Institute argues that nearly every American can choose from at least five or six broadband Internet service providers, despite claims that there are only a 'small number' of broadband providers. Citing research from the FCC, the Institute wrote that 90 percent of American households have access to at least one wired and one wireless broadband provider at speeds of at least 4 Mbit/s (500 kB/s) downstream and 1 Mbit/s (125 kB/s) upstream, and that nearly 88 percent of Americans can choose from at least two wired broadband providers regardless of speed (typically choosing between a cable and a telco offering). Further, three of the four national wireless companies report that they offer 4G LTE to between 250 and 300 million Americans, with the fourth (T-Mobile) at 209 million and counting.
Similarly, the FCC reported in June 2008 that 99.8 percent of zip codes in the United States had two or more providers of high speed Internet lines available, and 94.6 percent of zip codes had four or more providers, as reported by University of Chicago economists Gary Becker, Dennis Carlton, and Hal Sider in a 2010 paper.
When FCC Chairman Tom Wheeler redefined broadband from 4 Mbit/s to 25 Mbit/s (3.125 MB/s) or greater in January 2015, FCC commissioners Ajit Pai and Mike O'Reilly argued that the redefinition was meant to set up the agency's intent to settle the net neutrality fight with new regulations: the stricter speed guidelines painted the broadband industry as less competitive, justifying the FCC's moves toward Title II net neutrality regulations.
FCC commissioner Ajit Pai states that the FCC completely brushes aside the concerns of smaller competitors, who will be subject to various taxes, such as state property taxes and gross receipts taxes. As a result, according to Pai, the ruling does nothing to create more competition within the market.
According to Pai, the FCC's ruling to impose Title II regulations is opposed by the country’s smallest private competitors and many municipal broadband providers. In his dissent, Pai noted that 142 wireless ISPs (WISPs) said that FCC’s new "regulatory intrusion into our businesses...would likely force us to raise prices, delay deployment expansion, or both." He also noted that 24 of the country’s smallest ISPs, each with fewer than 1,000 residential broadband customers, wrote to the FCC stating that Title II "will badly strain our limited resources" because they "have no in-house attorneys and no budget line items for outside counsel." Further, another 43 municipal broadband providers told the FCC that Title II "will trigger consequences beyond the Commission’s control and risk serious harm to our ability to fund and deploy broadband without bringing any concrete benefit for consumers or edge providers that the market is not already proving today without the aid of any additional regulation."
Potentially increased taxes
FCC commissioner Ajit Pai, who opposed the net neutrality ruling, claims that the ruling issued by the FCC to impose Title II regulations explicitly opens the door to billions of dollars in new fees and taxes on broadband by subjecting them to the telephone-style taxes under the Universal Service Fund.
Net neutrality proponent Free Press argues that "the average potential increase in taxes and fees per household would be far less" than the estimate given by net neutrality opponents, and that if there were to be additional taxes, the figure may be around $4 billion; under favorable circumstances, "the increase would be exactly zero." Meanwhile, the Progressive Policy Institute claims that Title II could trigger taxes and fees of up to $11 billion a year. The financial website NerdWallet conducted its own assessment and settled on a possible $6.25 billion tax impact, estimating that the average American household may see its tax bill increase by $67 annually.
FCC spokesperson Kim Hart said that the ruling "does not raise taxes or fees. Period." However, the opposing commissioner, Ajit Pai, claims that "the plan explicitly opens the door to billions of dollars in new taxes on broadband...These new taxes will mean higher prices for consumers and more hidden fees that they have to pay." Pai explained: "One avenue for higher bills is the new taxes and fees that will be applied to broadband. Here’s the background. If you look at your phone bill, you’ll see a 'Universal Service Fee,' or something like it. These fees, what most Americans would call taxes, are paid by Americans on their telephone service. They funnel about $9 billion each year through the FCC. Consumers haven’t had to pay these taxes on their broadband bills because broadband has never before been a Title II service. But now it is. And so the Order explicitly opens the door to billions of dollars in new taxes."
Prevent overuse of bandwidth
Since the early 1990s, Internet traffic has increased steadily. The arrival of picture-rich websites and MP3s led to a sharp increase in the mid-1990s, followed by another sharp increase from 2003 onward as video streaming and peer-to-peer file sharing became more common. In reaction to companies such as YouTube, as well as smaller companies that began offering free, bandwidth-intensive video content, at least one Internet service provider (ISP), SBC Communications (now AT&T Inc.), suggested that it should have the right to charge these companies for making their content available over the provider's network.
Bret Swanson of the Wall Street Journal wrote in 2007 that the popular websites of that time, including YouTube, MySpace, and blogs, were put at risk by net neutrality. He noted that, at the time, YouTube streamed as much data in three months as the world's radio, cable and broadcast television channels did in one year, 75 petabytes. He argued that networks were not remotely prepared to handle the amount of data required to run these sites. He also argued that net neutrality would prevent broadband networks from being built, which would limit available bandwidth and thus endanger innovation.
High costs to entry for cable broadband
According to a Wired article by TechFreedom's Berin Szoka, Matthew Starr, and Jon Henke, local governments and public utilities impose the most significant barriers to entry for new cable broadband competition: "While popular arguments focus on supposed 'monopolists' such as big cable companies, it’s government that’s really to blame." The authors state that local governments and their public utilities charge ISPs far more than the access actually costs and have the final say on whether an ISP can build a network. Public officials determine what hoops an ISP must jump through to get approval for access to publicly owned "rights of way" (which let them place their wires), reducing the number of potential competitors who can profitably deploy Internet service, such as AT&T's U-Verse, Google Fiber, and Verizon FiOS. Kickbacks may include municipal requirements for ISPs such as building out service where it isn't demanded, donating equipment, and delivering free broadband to government buildings.
According to PayPal founder and Facebook investor Peter Thiel, "Net neutrality has not been necessary to date. I don’t see any reason why it’s suddenly become important, when the Internet has functioned quite well for the past 15 years without it...Government attempts to regulate technology have been extraordinarily counterproductive in the past." Max Levchin, the other co-founder of PayPal, echoed similar statements, telling CNBC, "The Internet is not broken, and it got here without government regulation and probably in part because of lack of government regulation."
FCC Commissioner Ajit Pai, who was one of the two commissioners who opposed the net neutrality proposal, criticized the FCC's ruling on internet neutrality, stating that the perceived threats from ISPs to deceive consumers, degrade content, or disfavor the content that they don’t like are non-existent: "The evidence of these continuing threats? There is none; it’s all anecdote, hypothesis, and hysteria. A small ISP in North Carolina allegedly blocked VoIP calls a decade ago. Comcast capped BitTorrent traffic to ease upload congestion eight years ago. Apple introduced Facetime over Wi-Fi first, cellular networks later. Examples this picayune and stale aren’t enough to tell a coherent story about net neutrality. The bogeyman never had it so easy."
FCC Commissioner Mike O'Reilly, the other opposing commissioner, also claims that the ruling is a solution to a hypothetical problem, "Even after enduring three weeks of spin, it is hard for me to believe that the Commission is establishing an entire Title II/net neutrality regime to protect against hypothetical harms. There is not a shred of evidence that any aspect of this structure is necessary. The D.C. Circuit called the prior, scaled-down version a ‘prophylactic’ approach. I call it guilt by imagination."
In a Chicago Tribune article, FCC Commissioner Pai and Joshua Wright of the Federal Trade Commission argue that "the Internet isn't broken, and we don't need the president's plan to 'fix' it. Quite the opposite. The Internet is an unparalleled success story. It is a free, open and thriving platform."
Tim Wu, though a proponent of network neutrality, claims that the current Internet is not neutral as its implementation of best effort generally favors file transfer and other non-time-sensitive traffic over real-time communications. Generally, a network which blocks some nodes or services for the customers of the network would normally be expected to be less useful to the customers than one that did not. Therefore, for a network to remain significantly non-neutral requires either that the customers not be concerned about the particular non-neutralities or the customers not have any meaningful choice of providers, otherwise they would presumably switch to another provider with fewer restrictions.
While the network neutrality debate continues, network providers often enter into peering arrangements among themselves. These agreements often stipulate how certain information flows should be treated. In addition, network providers often implement various policies such as blocking of port 25 to prevent insecure systems from serving as spam relays, or other ports commonly used by decentralized music search applications implementing peer-to-peer networking models. They also present terms of service that often include rules about the use of certain applications as part of their contracts with users.
Most consumer Internet providers implement policies like these. The MIT Mantid Port Blocking Measurement Project is an effort to characterize Internet port blocking and potentially discriminatory practices. However, the effect of peering arrangements among network providers is only local to the peers that enter into the arrangements and cannot affect traffic flow outside their scope.
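A port-blocking measurement of the kind such projects perform can be approximated with a simple reachability probe. A minimal sketch (the hostnames are placeholders; a failed connection can also mean the server is down, so real measurements compare many vantage points):

```python
# Probe whether a TCP port is reachable from this network.
import socket

def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Port 25 (SMTP) is a classic target of ISP blocking policies.
print("port 25:", port_reachable("mail.example.org", 25))   # placeholder host
print("port 443:", port_reachable("www.example.org", 443))  # placeholder host
```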
Jon Peha from Carnegie Mellon University believes it is important to create policies that protect users from harmful traffic discrimination, while allowing beneficial discrimination. Peha discusses the technologies that enable traffic discrimination, examples of different types of discrimination, and potential impacts of regulation.
Google Chairman Eric Schmidt aligns Google's views on data discrimination with Verizon's: "I want to be clear what we mean by Net neutrality: What we mean is if you have one data type like video, you don't discriminate against one person's video in favor of another. But it's okay to discriminate across different types. So you could prioritize voice over video. And there is general agreement with Verizon and Google on that issue."
Echoing similar comments by Schmidt, Google's Chief Internet Evangelist and "father of the internet", Vint Cerf, says that "it’s entirely possible that some applications need far more latency, like games. Other applications need broadband streaming capability in order to deliver real-time video. Others don’t really care as long as they can get the bits there, like e-mail or file transfers and things like that. But it should not be the case that the supplier of the access to the network mediates this on a competitive basis, but you may still have different kinds of service depending on what the requirements are for the different applications."
Quality of service
Internet routers forward packets according to the diverse peering and transport agreements that exist between network operators. Many networks using Internet protocols now employ quality of service (QoS), and Network Service Providers frequently enter into Service Level Agreements with each other embracing some sort of QoS.
There is no single, uniform method of interconnecting networks using IP, and not all networks that use IP are part of the Internet. IPTV networks are isolated from the Internet, and are therefore not covered by network neutrality agreements.
The IP datagram includes a 3-bit wide Precedence field and a larger DiffServ Code Point (DSCP) that are used to request a level of service, consistent with the notion that protocols in a layered architecture offer services through Service Access Points. This field is sometimes ignored, especially if it requests a level of service outside the originating network's contract with the receiving network. It is commonly used in private networks, especially those including Wi-Fi networks where priority is enforced. While there are several ways of communicating service levels across Internet connections, such as SIP, RSVP, IEEE 802.11e, and MPLS, the most common scheme combines SIP and DSCP. Router manufacturers now sell routers that have logic enabling them to route traffic for various Classes of Service at "wire-speed".
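Applications can request a service level themselves by setting those bits. A minimal sketch, assuming a Unix-like platform where Python exposes the IP_TOS socket option; whether any network honors the marking depends entirely on operator policy:

```python
# Mark outgoing UDP datagrams with a DSCP value.
# DSCP occupies the top six bits of the old IPv4 TOS byte, hence the << 2.
import socket

EF = 46  # Expedited Forwarding, a DSCP class commonly used for voice traffic

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, EF << 2)

# 192.0.2.1 is a reserved documentation address; substitute a real endpoint.
sock.sendto(b"latency-sensitive payload", ("192.0.2.1", 5004))
```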
With the emergence of multimedia, VoIP, IPTV, and other applications that benefit from low latency, various attempts to address the inability of some private networks to limit latency have arisen, including the proposition of offering tiered service levels that would shape Internet transmissions at the network layer based on application type. These efforts are ongoing, and are starting to yield results as wholesale Internet transport providers begin to amend service agreements to include service levels.
Advocates of net neutrality have proposed several methods to implement a net neutral Internet that includes a notion of quality-of-service:
- An approach offered by Tim Berners-Lee allows discrimination between different tiers, while enforcing strict neutrality of data sent at each tier: "If I pay to connect to the Net with a given quality of service, and you pay to connect to the net with the same or higher quality of service, then you and I can communicate across the net, with that quality and quantity of service". "[We] each pay to connect to the Net, but no one can pay for exclusive access to me."
- United States lawmakers have introduced bills that would allow quality of service discrimination for certain services as long as no special fee is charged for higher-quality service.
Founder of Epic Privacy Browser, Alok Bhardwaj, has argued that net neutrality preservation through legislation is consistent with implementing quality of service protocols. He argues legislation should ban the charging of fees for any quality of service, which would both allow networks to implement quality of service and remove any incentive to abuse net neutrality ideas. He argues that since implementing quality of service doesn't require any additional costs versus a non-QoS network, there's no reason implementing quality of service should entail any additional fees. However, the core network hardware needed (with a large number of queues, etc.) and the cost of designing and maintaining a QoS network are both much higher than for a non-QoS network.
Broadband Internet access has most often been sold to users based on Excess Information Rate or maximum available bandwidth. If Internet service providers (ISPs) can provide varying levels of service to websites at various prices, this may be a way to manage the costs of unused capacity by selling surplus bandwidth (or "leverage price discrimination to recoup costs of 'consumer surplus'"). However, purchasers of connectivity on the basis of Committed Information Rate or guaranteed bandwidth capacity must expect the capacity they purchase to be available in order to meet their communications requirements.
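A committed rate of this kind is conventionally policed with a token bucket: packets that fit the accumulated token budget conform, while bursts beyond the bucket depth are treated as excess. A minimal sketch with illustrative parameters:

```python
# Token-bucket policer for a committed information rate (CIR).
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bits: float):
        self.rate = rate_bps        # committed rate, bits per second
        self.capacity = burst_bits  # bucket depth, bits
        self.tokens = burst_bits
        self.last = time.monotonic()

    def conforms(self, packet_bits: int) -> bool:
        """True if the packet fits the committed rate; False marks it excess."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bits <= self.tokens:
            self.tokens -= packet_bits
            return True
        return False

bucket = TokenBucket(rate_bps=1_000_000, burst_bits=64_000)  # 1 Mbit/s CIR
print(bucket.conforms(12_000))  # one 1500-byte packet conforms while within rate
```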
Various studies have sought to provide network providers the necessary formulas for adequately pricing such a tiered service for their customer base. But while network neutrality is primarily focused on protocol based provisioning, most of the pricing models are based on bandwidth restrictions.
Some proponents of net neutrality legislation point to concerns about privacy rights that could arise in its absence, and to how infringements of privacy could be exploited. While some believe it is hyperbole to suggest that ISPs will transparently monitor transmitted content, or that ISPs will have to alter their content, there is concern that ISPs may have profit motives to analyze what their subscribers are viewing and to use such information to their financial advantage. For example, an ISP may be able to essentially replicate the "targeting" already employed by companies like Google. To critics such as David Clark, a senior research scientist at the Massachusetts Institute of Technology, the proper question is "who has the right to observe everything you do?"
References
- Tim Wu (2003). "Network Neutrality, Broadband Discrimination". Journal on Telecommunications and High Technology Law. Retrieved 23 April 2014.
- Krämer, J; Wiewiorra, L. & Weinhardt,C. (2013): "Net Neutrality: A progress report". Telecommunications Policy 37(9), 794–813.
- Berners-Lee, Tim (21 June 2006). "Net Neutrality: This is serious". timbl's blog. Retrieved 26 December 2008.
- Staff. "A Guide to Net Neutrality for Google Users". Google. Archived from the original on 1 September 2008. Retrieved 7 December 2008.
- Peter Svensson (19 October 2007). "Comcast Blocks some Subscriber Internet Traffic, AP Testing shows". Associated Press. Retrieved 25 October 2009.
- Anderson, Nate (25 July 2007). "Deep packet inspection meets 'Net neutrality, CALEA". Ars Technica. Retrieved 23 June 2011.
- Honan, Matthew (12 February 2008). "Inside Net Neutrality: Is your ISP filtering content?". MacWorld. Retrieved 26 December 2008.
- Wu, Tim. "Network Neutrality FAQ". Retrieved 26 December 2008.
- Hagai Bar-El (19 August 2014). "Protecting Network Neutrality: Both Important and Hard". Retrieved 19 August 2014.
- Mathew Ingram (23 March 2012). "Open vs. closed: What kind of internet do we want?". GigaOm. Retrieved 8 June 2014.
- "About the Open Internet". European Commission. Retrieved 23 April 2014.
- Alexis C. Madrigal and Adrienne LaFrance (25 April 2014). "Net Neutrality: A Guide to (and History of) a Contested Idea". The Atlantic. Retrieved 5 June 2014.
"This idea of net neutrality...[Lawrence Lessig] used to call the principle e2e, for end to end."
- IETF RFC 2475, "An Architecture for Differentiated Services" – definition of "Shaper"
- "ITU-T I.371: Traffic control and congestion control in B-ISDN". ITU-T. Retrieved 14 September 2014.
- Isenberg, David (2 July 2007). "Research on Costs of Net Neutrality". Retrieved 26 December 2008.
- Anderson, Nate (25 July 2007). "Deep packet inspection meets 'Net neutrality, CALEA". Ars Technica. Retrieved 26 December 2008.
- Hansell, Saul (2 August 2008). "F.C.C. Vote Sets Precedent on Unfettered Web Usage". The New York Times.
- Duncan, Geoff (23 December 2009). "Comcast to Pay $16 Million for Blocking P2P Applications". Digital Trends. Retrieved 23 December 2009.
- Cheng, Jacqui (22 December 2009). "Comcast settles P2P throttling class-action for $16 million". Ars Technica (Condé Nast). Retrieved 23 December 2009.
- M. Chris Riley and Ben Scott, Free Press (March 2009). "Deep Packet Inspection: The end of the Internet as we know it?". Center for Internet and Society. Retrieved 29 May 2014.
- Paul Roberts, IDG News Service (20 October 2003). "NetScreen announces deep inspection firewall". Network World. Retrieved 29 May 2014.
- Ben Gilbert (23 December 2013). "T-Mobile prepaid offering free data... but only to access Facebook". Engadget. Retrieved 18 November 2014.
- Lily Hay Newman (21 January 2014). "Net Neutrality Is Already in Trouble in the Developing World". Slate. Retrieved 18 November 2014.
- Robertson, Adi (19 January 2013). "French ISP Orange says it's making Google pay to send traffic over its network". The Verge. Retrieved 14 January 2014.
- "ARCEP closes the administrative inquiry involving several companies, including Free and Google, on the technical and financial terms governing IP traffic routing". 19 July 2013. Retrieved 18 January 2014.
- Brendan Greeley (21 June 2012). "Comcast 'Invents' Its Own Private Internet". Bloomberg. Retrieved 18 November 2014.
- Joshua Brustein (24 February 2014). "Netflix’s Deal With Comcast Isn’t About Net Neutrality—Except That It Is". Bloomberg. Retrieved 18 November 2014.
- Waniata, Ryan (14 April 2014). "Comcast Jumps up in Netflix Speed Rankings after Payola-style Agreement". Digital Trends. Retrieved 15 August 2014.
- Waniata, Ryan (9 June 2014). "Netflix Calls Verizon out on the Big Red Screen [Update: Netflix Backs Off]". Digital Trends. Retrieved 15 August 2014.
- Lessig, L. 1999. Cyberspace’s Architectural Constitution, draft 1.1, Text of lecture given at www9, Amsterdam, Netherlands
- "What Is Net Neutrality?". The American Civil Liberties Union. Retrieved 28 February 2015.
- Lawrence Lessig and Robert W. McChesney (8 June 2006). "No Tolls on The Internet". Columns.
- Morran, Chris (24 February 2015). "These 2 Charts From Comcast Show Why Net Neutrality Is Vital". The Consumerist. Retrieved 28 February 2015.
- Davidson, Alan (8 November 2005). "Vint Cerf speaks out on net neutrality". Blogspot.com. Retrieved 25 January 2013.
- "MIT.edu". Dig.csail.mit.edu. 21 June 2006. Retrieved 23 June 2011.
- "Team Internet". Fight for the Future. Retrieved 28 February 2015.
- "Open letter to the Committee on Energy and Commerce" (PDF). 1 March 2006. Retrieved 26 December 2008.
- Mitchell. "A Major Victory for the Open Web". The Mozilla Blog. Mozilla. Retrieved 2 March 2015.
- Robert Kahn and Ed Feigenbaum (9 January 2007). An Evening with Robert Kahn (WMV). Computer History Museum. Retrieved 26 December 2008. Partial transcript: Hu-Berlin.de
- Lohr, Steve (2 February 2015). "In Net Neutrality Push, F.C.C. Is Expected to Propose Regulating Internet Service as a Utility". New York Times. Retrieved 2 February 2015.
- Lohr, Steve (2 February 2015). "F.C.C. Chief Wants to Override State Laws Curbing Community Net Services". New York Times. Retrieved 2 February 2015.
- Flaherty, Anne (31 January 2015). "Just whose Internet is it? New federal rules may answer that". AP News. Retrieved 31 January 2015.
- Fung, Brian (2 January 2015). "Get ready: The FCC says it will vote on net neutrality in February". Washington Post. Retrieved 2 January 2015.
- Staff (2 January 2015). "FCC to vote next month on net neutrality rules". AP News. Retrieved 2 January 2015.
- Lohr, Steve (4 February 2015). "F.C.C. Plans Strong Hand to Regulate the Internet". New York Times. Retrieved 5 February 2015.
- Wheeler, Tom (4 February 2015). "FCC Chairman Tom Wheeler: This Is How We Will Ensure Net Neutrality". Wired (magazine). Retrieved 5 February 2015.
- The Editorial Board (6 February 2015). "Courage and Good Sense at the F.C.C. - Net Neutrality's Wise New Rules". New York Times. Retrieved 6 February 2015.
- "FCC's Pai: Net-neutrality proposal is secret Internet regulation plan". Los Angeles Times. 10 February 2015.
- Weisman, Jonathan (24 February 2015). "As Republicans Concede, F.C.C. Is Expected to Enforce Net Neutrality". New York Times. Retrieved 24 February 2015.
- Lohr, Steve (25 February 2015). "The Push for Net Neutrality Arose From Lack of Choice". New York Times. Retrieved 25 February 2015.
- Staff (26 February 2015). "FCC Adopts Strong, Sustainable Rules To Protect The Open Internet" (PDF). Federal Communications Commission. Retrieved 26 February 2015.
- Ruiz, Rebecca R.; Lohr, Steve (26 February 2015). "In Net Neutrality Victory, F.C.C. Classifies Broadband Internet Service as a Public Utility". New York Times. Retrieved 26 February 2015.
- Flaherty, Anne (25 February 2015). "FACT CHECK: Talking heads skew 'net neutrality' debate". AP News. Retrieved 26 February 2015.
- Fung, Brian (26 February 2015). "The FCC approves strong net neutrality rules". The Washington Post. Retrieved 26 February 2015.
- Yu, Roger and Snider, Mike (26 February 2015). "FCC approves new net neutrality rules". USA Today. Retrieved 26 February 2015.
- Liebelson, Dana (26 February 2015). "Net Neutrality Prevails In Historic FCC Vote". The Huffington Post. Retrieved 27 February 2015.
- Ruiz, Rebecca R. (12 March 2015). "F.C.C. Sets Net Neutrality Rules". New York Times. Retrieved 13 March 2015.
- Sommer, Jeff (12 March 2015). "What the Net Neutrality Rules Say". New York Times. Retrieved 13 March 2015.
- FCC Staff (12 March 2015). "Federal Communications Commission - FCC 15-24 - In the Matter of Protecting and Promoting the Open Internet - GN Docket No. 14-28 - Report and Order on Remand, Declaratory Ruling, and Order" (PDF). Federal Communications Commission. Retrieved 13 March 2015.
- "Four tenors: Call for Internet Speech Rights". ARTICLE 19. Retrieved 31 August 2012.
- Meza, Philip E. (20 March 2007). Coming Attractions?. Stanford University Press. p. 158. ISBN 9780804756600.
- Plunkett, Jack W. (2008). Plunkett's Telecommunications Industry Almanac 2009. Plunkett Research. p. 208. ISBN 9781593921415.
- "Defeat for net neutrality backers". BBC News. 9 June 2006. Retrieved 26 December 2008.
- Cogent Communications, Inc. "Net Neutrality Policy Statement". Retrieved 21 April 2009.
- "Google's Sordid History of Net Neutrality Hypocrisy". Gizmodo. Retrieved 14 September 2014.
- Tim Berners-Lee (18 November 2006). Humanity Lobotomy – what will the Internet look like in 10 years?. Retrieved 26 December 2008.
- Cerf, Vinton (7 February 2006). "The Testimony of Mr. Vinton Cerf, Vice President and Chief Internet Evangelist, Google" (PDF). p. 1. Retrieved 5 November 2012.
- Cerf, Vinton (July 2009). "The Open Network. What it is, and why it matters". Telecommunications Journal of Australia 59 (2). doi:10.2104/tja09018/issn.1835-4270.
- Dynamic Platform Standards Project. "Preserve the Internet Standards for Net Neutrality". Retrieved 26 December 2008.
- Albanesius, Chloe (22 September 2009). "Obama Supports Net Neutrality Plan". PC Magazine. Retrieved 25 January 2013.
- Broache, Anne (29 October 2007). "Obama pledges Net neutrality laws if elected president". CNET. Retrieved 25 January 2013.
- Wyatt, Edward (10 November 2014). "Obama Asks F.C.C. to Adopt Tough Net Neutrality Rules". New York Times. Retrieved 15 November 2014.
- NYT Editorial Board (14 November 2014). "Why the F.C.C. Should Heed President Obama on Internet Regulation". New York Times. Retrieved 15 November 2014.
- Sepulveda, Ambassador Daniel A. (21 January 2015). "The World Is Watching Our Net Neutrality Debate, So Let’s Get It Right". Wired (website). Retrieved 20 January 2015.
- Hardawar, Devindra (12 November 2014). "AT&T halts fiber build-out until net neutrality rules are sorted". www.engadget.com (Reuters). Retrieved 12 November 2014.
- Phillips, Peter (2006). Censored 2007. Seven Stories Press. p. 34. ISBN 9781583227381.
- Robertson, Adi. "Federal court strikes down FCC net neutrality rules". The Verge. Retrieved 14 January 2014.
- "Frequently Asked Questions". SaveTheInternet.com. Archived from the original on 11 December 2008. Retrieved 7 December 2008.
- Davidson, Alan (8 November 2005). "Vint Cerf speaks out on net neutrality". The Official Google Blog. Google.
- "Video Stream Quality Impacts Viewer Behavior, by Krishnan and Sitaraman, ACM Internet Measurement Conference, Nov 2012.".
- "NetFlix comments to FCC, page 17, Sept 16th 2014".
- "Vimeo Open Letter to FCC, page 11, July 15th 2014".
- "Patience is a Network Effect, by Nicholas Carr, Nov 2012".
- "NPR Morning Edition: In Video-Streaming Rat Race, Fast is Never Fast Enough, October 2012". Retrieved 2014-07-03.
- "Boston Globe: Instant gratification is making us perpetually impatient, Feb 2013". Retrieved 2014-07-03.
- "What Is Net Neutrality? 10 Aug 2010".
- Wu, Timothy (1 May 2006). "Why You Should Care About Network Neutrality". Slate.
- Dynamic Platform Standards Project. "Internet Platform for Innovation Act". Sec. 2.11. Retrieved 26 December 2008.
- "Against Fee-Based and other Pernicious Net Prejudice: An Explanation and Examination of the Net Neutrality Debate". Scribd.com. 27 November 2007. Retrieved 23 June 2011.
- Isenberg, David (1 August 1996). "The Rise of the Stupid Network". Retrieved 19 August 2006.
- J. H. Saltzer; D. P. Reed; D. D. Clark (November 1984). "End-to-end arguments in system design". ACM Transactions on Computer Systems 2 (4): 277–288. doi:10.1145/357401.357402.
- Hart, Jonathan D. (2007). Internet Law. BNA Books. p. 750. ISBN 9781570186837.
- Farber, David (2 June 2006). "Common sense about network neutrality". Interesting-People (Mailing list). Retrieved 26 December 2008.
- "Robert Kahn, Forbes". Retrieved 11 November 2011.
- "Hands off the Internet". Archived from the original on 5 January 2009. Retrieved 26 December 2008.
- Jeffrey H. Birnbaum, "No Neutral Ground in This Internet Battle", The Washington Post, 26 July 2006.
- "Hands Off the Internet, "Member Organizations,"". Archived from the original on 5 January 2009. Retrieved 4 August 2006.
- Anne Veigle, "Groups Spent $42 Million on Net Neutrality Ads, Study Finds", Communications Daily, 20 July 2006.
- SaveTheInternet.com, "One Million Americans Urge Senate to Save the Internet", at Savetheinternet.com (last visited 4 August 2006).
- Pepper, Robert (14 March 2007). "Network Neutrality: Avoiding a Net Loss". TechNewsWorld. Retrieved 26 December 2008.
- David Farber; Michael Katz (19 January 2007). "Hold Off On Net Neutrality". The Washington Post. Retrieved 26 December 2008.
- "FTC to Host Workshop on Broadband Connectivity Competition Policy". Federal trade Commission. December 2006.
- Mohammed, Arshad (February 2007). "Verizon Executive Calls for End to Google's 'Free Lunch'". The Washington Post.
- Crowcroft, Jon (2007). Net Neutrality: The Technical Side of the Debate: A White Paper (PDF). University of Cambridge. p. 5. Retrieved 23 June 2009.
- "Former ITIF Staff". ITIF. Retrieved 6 March 2015.
- "Google's political Head-fake". SFGate. 9 July 2008. Retrieved 14 September 2014.
- http://www.itu.int/en/ITU-D/Statistics/Documents/publications/mis2013/MIS2013_without_Annex_4.pdf
- Wood, Matt (2 December 2014). "Claims That Real Net Neutrality Would Result in New Internet Tax Skew the Math and Confuse the Law". Free Press. Retrieved 28 February 2015.
- "Google and cable firms warn of risks from Web TV". USA Today. 2 July 2007. Retrieved 20 May 2010.
- Kelly, Spencer (15 June 2007). "Warning of 'Internet overload'". BBC Click.
- Banks, Theodore L. (24 May 2002). Corporate Legal Compliance Handbook. Aspen Publishers Online. p. 70. ISBN 9780735533424.
- Swanson, Bret (20 January 2007). "The Coming Exaflood". The Wall Street Journal.
- Wu, Tim (2003). "Network Neutrality, Broadband Discrimination". Journal of Telecommunications and High Technology Law 2: 141. doi:10.2139/ssrn.388863. SSRN 388863.
- Jon Peha. "The Benefits and Risks of Mandating Network Neutrality, and the Quest for a Balanced Policy". Retrieved 1 January 2007.
- Goldman, David (5 August 2010). "Why Google and Verizon's Net neutrality deal affects you". CNNMoney (CNN). Retrieved 6 August 2010.
- Sullivan, Mark (14 August 2006). "Carriers Seek IP QOS Peers". Light Reading. Retrieved 26 December 2008.
- Berners-Lee, Tim (2 May 2006). "Neutrality of the Net". timbl's blog. Retrieved 26 December 2008.
- A bill to amend the Communications Act of 1934 to ensure net neutrality, S. 215
- "NCSU.edu" (PDF). Retrieved 23 June 2011.
- Joch, Alan (October 2009). "Debating Net Neutrality". Communications of the ACM 52 (10): 14–15. doi:10.1145/1562764.1562773.
- Bimbaum, Jeffrey (26 June 2006). "No Neutral Ground In This Battle". The Washington Post. Retrieved 15 December 2006.
- Technological Neutrality and Conceptual Singularity
- Why Consumers Should Be Worried About Net Neutrality
- The FCC on Net Neutrality: Be Careful What You Wish For
- Internet Policy: Who's Pulling the Strings
- Financial backers of pro neutrality groups
- Battle for the Net - Website advocating net neutrality by Fight for the Future
- Don't Break The Net - Website advocating against net neutrality by TechFreedom with monetary support from telcos.
- La Quadrature du Net – Complex dossier and links about net neutrality
- See answer to corresponding question on website's "About TechFreedom" section.
|
Bar Graph Lesson Plan
6th grade math, 10/09/09
Intern: Laurie Reeder; Mentor: Stuart Allen
Objectives: Students will be able to collect data and create a data table and bar graph from the data. Students will be able to identify and label all the component parts of a bar graph.
Focusing Question: Why is it useful to present data with a bar graph?
Connection/Rationale: This is a continuation of our work on recording and presenting data. “You will be using bar graphs throughout your school career in math, science, social studies, economics, and others. All kinds of professions use bar graphs to present data (show school newsletter); you will run into these throughout your adult life.”
Materials: Student journals, white board, individual packages of Halloween-sized M & M’s, data table templates, graph paper, vocabulary words
Hook: “Today’s lesson will be interesting and delicious! First you must collect the data, then you may eat it!”
I. Start with a directed lesson on bar graphs.
Ask questions: What is data? (pieces of information that are often numerical in value)
What is a graph? (A visual way of presenting data) What is a bar graph? (Used to compare categories of data)
Show data table for pets in Mr. Zorg’s class
Draw and label a bar graph on the board (students will do so in their journals)
Include: x (horizontal) axis = category, y (vertical) axis = frequency, where they meet = 0
1. Scale vs. interval (interval size should allow the highest frequency value to be at least ½ way up the graph)
2. Label horizontal and vertical axis
3. Draw bars for each category
4. Specific title for graph
II: Introduce activity: each student will receive a small bag of M & M candy, a data table sheet, and graph paper.
- Open candy bag and count number of each color of candy and record in the data table
- Make a bar graph using the data from your data table
- Criteria for the graph:
- Axes are labeled
- Categories are labeled
- Scale/interval is appropriate for the graph—high frequency at least ½ way up
- Work is neat…colored pencil only
- Space between bars
- Title is specific
Adaptations/accommodations: Information is presented in several ways (visually, auditorily, and kinesthetically), and the students' taste buds will also be engaged!
Closure: If time allows, students will share data and create a second graph based on their group’s shared data. We will also use the data for a future lesson on line plots.
Assessment: Graphs will show whether the students have adhered to the criteria. Students will be asked to share their findings with the group and then the group will share with the class as a whole.
Extension: Create a line plot for blue candies based on data from the entire class
Data, statistics, probability:
M(DSP)6-1: Interprets graphs
M(DSP)6-3: Organizes and displays data
M(CCR)8-2: Students will create and use representations to communicate mathematical ideas and convert between representations (table to graph)
|
HW: 1) Practice with Measures of Center
2) Survey including: data table, dot plot, mean median & mode as well as bar graph.
Lesson 1: Finding missing data and practice with measures of center.
Lesson 2: Bar graphs and Histograms Creating Histograms
Histogram Practice Reading Histograms
Lesson 3: Vocabulary Games. Crossword Puzzle
HW: Frayer Models (Mean, Median, Mode, Dot Plot) Dot Plot Practice Reading Dot Plots (optional) (Answers)
Lesson 1: Mean, Median, Mode and Dot Plots
Lesson 2: Practice with collecting and interpreting data
Measures of Central Tendency (mean, median & mode)
Practice: Calculating Mean Calculating Median
Practice with Displays: Mean Median
HW: Complete in class task (no HW)
Lesson 1: Introduction to Statistics NBA Totals
Lesson 2: Class activity in data collection
Videos What are Statistics? Why Study Statistics? 200 Years
|
By using certain types of active black holes that lie at the center of many galaxies, researchers have developed a method with the potential to measure distances of billions of light years with a high degree of accuracy.
Radiation emitted in the vicinity of black holes could be used to measure distances of billions of light years, says TAU researcher.
A few years ago, researchers revealed that the universe is expanding at a much faster rate than originally believed — a discovery that earned a Nobel Prize in 2011. But measuring the rate of this acceleration over large distances is still challenging and problematic, says Prof. Hagai Netzer of Tel Aviv University’s School of Physics and Astronomy.
Now, Professor Netzer, along with Jian-Min Wang, Pu Du and Chen Hu of the Institute of High Energy Physics of the Chinese Academy of Sciences and Dr. David Valls-Gabaud of the Observatoire de Paris, has developed a method with the potential to measure distances of billions of light years with a high degree of accuracy. The method uses certain types of active black holes that lie at the center of many galaxies. The ability to measure very long distances translates into seeing further into the past of the universe — and being able to estimate its rate of expansion at a very young age.
Published in the journal Physical Review Letters, this system of measurement takes into account the radiation emitted from the material that surrounds black holes before it is absorbed. As material is drawn into a black hole, it heats up and emits a huge amount of radiation, up to a thousand times the energy produced by a large galaxy containing 100 billion stars. For this reason, it can be seen from very far distances, explains Professor Netzer.
Solving for unknown distances
Using radiation to measure distances is a general method in astronomy, but until now black holes have never been used to help measure these distances. By comparing the amount of energy emitted in the vicinity of the black hole with the amount of radiation that reaches Earth, it is possible to infer the distance to the black hole itself and the time in the history of the universe when the energy was emitted.
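The comparison rests on the inverse-square law (a standard relation, spelled out here for clarity rather than quoted from the researchers): the flux F measured at Earth falls off with distance d as

F = L / (4πd²), so d = √( L / (4πF) )

where L is the intrinsic luminosity inferred for the radiating material around the black hole. Over cosmological distances this d is, strictly speaking, the luminosity distance, which also encodes the expansion history of the universe.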
Getting an accurate estimate of the radiation being emitted depends on the properties of the black hole. For the specific type of black holes targeted in this work, the amount of radiation emitted as the object draws matter into itself is actually proportional to its mass, say the researchers. Therefore, long-established methods to measure this mass can be used to estimate the amount of radiation involved.
The viability of this theory was demonstrated using the known properties of black holes in our own astronomical vicinity, “only” several hundred million light years away. Professor Netzer believes that his system will add to the astronomer’s tool kit for measuring distances much farther away, complementing the existing method, which uses the exploding stars called supernovae.
Illuminating “Dark Energy”
According to Professor Netzer, the ability to measure far-off distances has the potential to unravel some of the greatest mysteries of the universe, which is approximately 14 billion years old. “When we are looking into a distance of billions of light years, we are looking that far into the past,” he explains. “The light that I see today was first produced when the universe was much younger.”
One such mystery is the nature of what astronomers call “dark energy,” the most significant source of energy in the present day universe. This energy, which is manifested as some kind of “anti-gravity,” is believed to contribute towards the accelerated expansion of the universe by pushing outwards. The ultimate goal is to understand dark energy on physical grounds, answering questions such as whether this energy has been consistent throughout time and if it is likely to change in the future.
Publication: Jian-Min Wang, et al., “Super-Eddington Accreting Massive Black Holes as Long-Lived Cosmological Standards,” Phys. Rev. Lett. 110, 081301 (2013): DOI:10.1103/PhysRevLett.110.081301
PDF Copy of the Study: Super-Eddington accreting massive black holes as long-lived cosmological standards
Image: American Friends, Tel Aviv University
|
Let's take a look at how to put some of the common programming concepts into practice in your C code. The following is a quick summary of these concepts:
Functions -- As stated earlier, a function is a block of code representing something the computer should do when the program runs. Some languages call these structures methods, though C programmers don't typically use that term. Your program may define several functions and call those functions from other functions. Later, we'll take a closer look at the structure of functions in C.
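As a minimal sketch (the function name and values here are illustrative, not taken from any particular program), a function in C is defined once and can then be called from main or from other functions:

#include <stdio.h>

/* A small function: returns the larger of two integers. */
int max_of(int a, int b) {
    return (a > b) ? a : b;
}

int main(void) {
    printf("%d\n", max_of(3, 7));   /* calls the function; prints 7 */
    return 0;
}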
Variables -- When you run a program, sometimes you need the flexibility to run the program without knowing what the values are ahead of time. Like other programming languages, C allows you to use variables when you need that flexibility. Like variables in algebra, a variable in computer programming is a placeholder that stands for some value that you don't know or haven't found yet.
Data types -- In order to store data in memory while your program is running, and to know what operations you can perform on that data, a programming language like C defines certain data types it will recognize. Each data type in C has a certain size, measured in binary bits or bytes, and a certain set of rules about what its bits represent. Coming up, we'll see how important it is to choose the right data type for the task when you're using C.
Operations -- In C, you can perform arithmetic operations (such as addition) on numbers and string operations (such as concatenation, via standard library functions like strcat) on strings of characters. C also has built-in operators specifically designed for things you might want to do with your data. When we check out data types in C, we'll take a brief look at the operations, too.
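A short, hypothetical sketch showing a few data types and operations working together (strcat comes from the standard string.h library; the values are invented):

#include <stdio.h>
#include <string.h>

int main(void) {
    int count = 3;                 /* a whole number                */
    double price = 1.25;           /* a floating-point number       */
    char word[16] = "net";         /* a character array (a string)  */

    double total = count * price;  /* arithmetic operation          */
    strcat(word, "work");          /* string concatenation          */

    printf("%s total: %.2f\n", word, total);  /* prints: network total: 3.75 */
    return 0;
}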
Loops -- One of the most basic things a programmer will want to do is repeat an action some number of times based on certain conditions that come up while the program is running. A block of code designed to repeat based on given conditions is called a loop, and the C language provides for these common loop structures: while, do/while, for, continue/break and goto. C also includes the common if/then/else conditionals and switch/case statements.
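A brief sketch of a for loop combined with if/else (the values are arbitrary; declaring the loop counter inside the for statement requires C99 or later):

#include <stdio.h>

int main(void) {
    for (int i = 1; i <= 5; i++) {        /* repeat five times */
        if (i % 2 == 0) {
            printf("%d is even\n", i);
        } else {
            printf("%d is odd\n", i);
        }
    }
    return 0;
}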
Data structures -- When your program has a lot of data to handle, and you need to sort or search through that data, you'll probably use some sort of data structure. A data structure is a structured way of representing several pieces of data of the same data type. The most common data structure is an array, which is just an indexed list of a given size. C has libraries available to handle some common data structures, though you can always write functions and set up your own structures, too.
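A minimal array sketch (the scores are made up), searching an indexed list of fixed size for its largest value:

#include <stdio.h>

int main(void) {
    int scores[5] = { 70, 85, 92, 60, 78 };   /* an indexed list of fixed size */
    int highest = scores[0];

    for (int i = 1; i < 5; i++) {             /* simple linear search */
        if (scores[i] > highest) {
            highest = scores[i];
        }
    }
    printf("highest score: %d\n", highest);   /* prints 92 */
    return 0;
}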
Preprocessor operations -- Sometimes you'll want to give the compiler some instructions on things to do with your code before compiling it into the executable. These operations include substituting constant values and including code from C libraries (which you saw in the sample code earlier).
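A tiny sketch of both kinds of preprocessor instruction (the constant name and value are invented for illustration):

#include <stdio.h>      /* pull in code from a standard C library           */
#define TAX_RATE 0.07   /* substitute this constant value before compiling  */

int main(void) {
    double subtotal = 10.00;
    printf("total: %.2f\n", subtotal * (1.0 + TAX_RATE));   /* prints 10.70 */
    return 0;
}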
C also requires programmers to handle some concepts that many other programming languages have simplified or automated. These include pointers and manual memory management (C has no automatic garbage collection, so the programmer must free memory explicitly). Later pages cover the important things to know about these concepts when programming in C.
This quick overview of concepts may seem overwhelming if you're not already a programmer. Before you move on to tackle a dense C programming guide, let's take a user-friendly look at the core concepts among those listed above, starting with functions.
|
The use of near-infrared interferometry allowed the team to resolve a ring-shaped dust distribution (generally called "dust torus") in the inner region of the nucleus of the active galaxy NGC 3783. This method is able to achieve an angular resolution equivalent to the resolution of a telescope with a diameter of 130 meters. The resolved dust torus probably represents the reservoir of gaseous and dusty material that "feeds" the hot gas disk ("accretion disk") and the supermassive black hole in the center of this galaxy.
Artist's view of a dust torus surrounding the accretion disk and the central black hole in active galactic nuclei. Credit: NASA E/PO - Sonoma State University, Aurore Simonnet (http://epo.sonoma.edu/)
The Very Large Telescope Interferometer of the European Southern Observatory. Photo: Gerd Weigelt/MPIfR
Extreme physical processes occur in the innermost regions of galactic nuclei. Supermassive black holes were discovered in many galaxies. The masses of these black holes are often a millionfold larger than the mass of our sun. These central black holes are surrounded by hot and bright gaseous disks, called "accretion disks". The emitted radiation from these accretion disks is probably generated by infalling material. To maintain the high luminosity of the accretion disk, fresh material has to be permanently supplied. The dust tori (see Fig. 1) surrounding the accretion disks are most likely the reservoir of the material that flows through the accretion disk and finally "feeds" the growing black hole.
Observations of these dust tori are very challenging since their sizes are very small. A giant telescope with a mirror diameter of more than 100 meters would be able to provide the required angular resolution, but unfortunately telescopes of this size will not be available in the near future. This raises the question: Is there an alternative approach that provides the high resolution required?
The solution is to simultaneously combine ("interfere") the light from two or more telescopes since these multi-telescope images, which are called interferograms, contain high-resolution information. In the reported NGC 3783 observations, the AMBER interferometry instrument was used to combine the infrared light from two or three telescopes of ESO's Very Large Telescope Interferometer (VLTI, see Fig. 2). This interferometric method is able to achieve an extreme angular resolution that is proportional to the distance between the telescopes. Since the largest distance between the four telescopes of the VLTI is 130 meters, an angular resolution is obtained that is as high as the theoretical resolution of a telescope with a mirror diameter of 130 meters - a resolution that is 15 times higher than the resolution of one of the VLTI telescopes, which have a mirror diameter of 8 meters.
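As a rough consistency check (the observing wavelength is assumed here, since it is not quoted in the text): the diffraction-limited resolution of an interferometer scales as θ ≈ λ / B. For near-infrared light at roughly λ = 2.2 µm and the maximum VLTI baseline of B = 130 m,

θ ≈ 2.2 × 10⁻⁶ m / 130 m ≈ 1.7 × 10⁻⁸ radians ≈ 3.5 milli-arcseconds,

about 16 times finer than a single 8 m mirror (≈ 57 milli-arcseconds at the same wavelength), in line with the factor of roughly 15 quoted above.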
"The ESO VLTI provides us with a unique opportunity to improve our understanding of active galactic nuclei,", says Gerd Weigelt from the Max-Planck-Institut für Radioastronomie in Bonn. "It allows us to study fascinating physical processes with unprecedented resolution over a wide range of infrared wavelengths. This is needed to derive physical properties of these sources."
And Makoto Kishimoto emphasizes: "We hope to obtain more detailed information in the next few years by additional observations at shorter wavelengths, with longer baselines, and with higher spectral resolution. Most importantly, in a few years, two further interferometric VLTI instruments will be available, which can provide complementary information."
To resolve the nucleus of the active galaxy NGC 3783, the research team recorded thousands of two- and three-telescope interferograms with the VLTI. The telescope distances were in the range of 45 to 114 meters. The evaluation of these interferograms allowed the team to derive the radius of the compact dust torus in NGC 3783. A very small angular torus radius of 0.74 milli-arcsecond was measured, which corresponds to a radius of 0.52 light years. These near-infrared radius measurements, together with previously obtained mid-infrared measurements, allowed the team to derive important physical parameters of the torus of NGC 3783.
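The conversion from angle to physical size uses the small-angle relation r = D × θ (with θ in radians). Reading this backwards from the quoted figures, θ = 0.74 milli-arcseconds ≈ 3.6 × 10⁻⁹ radians and r = 0.52 light years imply a distance D ≈ 0.52 / (3.6 × 10⁻⁹) ≈ 1.4 × 10⁸ light years to NGC 3783. This is an inference from the numbers above, not a value stated in the press release, but it agrees with published distance estimates for the galaxy.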
"The high resolution of the VLTI is also important for studying many other types of astrophysical key objects", underlines Karl-Heinz Hofmann. "It is clear that infrared interferometry will revolutionize infrared astronomy in a similar way as radio interferometry has revolutionized radio astronomy."
The research team comprises scientists from the Universities of Florence, Grenoble, Nice, Santa Barbara, and from the MPI für Radioastronomie.
Contact: Prof. Dr. Gerd Weigelt,
Norbert Junkes | Max-Planck-Institut
|
The sulfate or sulphate (see spelling differences) ion is a polyatomic anion with the empirical formula SO4²⁻. Sulfate is the spelling recommended by IUPAC, but sulphate is used in British English. Salts, acid derivatives, and peroxides of sulfate are widely used in industry. Sulfates occur widely in everyday life. Sulfates are salts of sulfuric acid and many are prepared from that acid.
Molar mass: 96.06 g·mol−1
Conjugate acid: Hydrogen sulfate
The sulfate anion consists of a central sulfur atom surrounded by four equivalent oxygen atoms in a tetrahedral arrangement. The symmetry is the same as that of methane. The sulfur atom is in the +6 oxidation state while the four oxygen atoms are each in the −2 state. The sulfate ion carries an overall charge of −2 and it is the conjugate base of the bisulfate (or hydrogen sulfate) ion, HSO4⁻, which is in turn the conjugate base of H2SO4, sulfuric acid. Organic sulfate esters, such as dimethyl sulfate, are covalent compounds and esters of sulfuric acid. The tetrahedral molecular geometry of the sulfate ion is as predicted by VSEPR theory.
The first description of the bonding in modern terms was by Gilbert Lewis in his groundbreaking paper of 1916 where he described the bonding in terms of electron octets around each atom, that is no double bonds and a formal charge of +2 on the sulfur atom.
Later, Linus Pauling used valence bond theory to propose that the most significant resonance canonicals had two pi bonds involving d orbitals. His reasoning was that the charge on sulfur was thus reduced, in accordance with his principle of electroneutrality. The S−O bond length of 149 pm is shorter than the bond lengths in sulfuric acid of 157 pm for S−OH. The double bonding was taken by Pauling to account for the shortness of the S−O bond. Pauling's use of d orbitals provoked a debate on the relative importance of π bonding and bond polarity (electrostatic attraction) in causing the shortening of the S−O bond. The outcome was a broad consensus that d orbitals play a role, but are not as significant as Pauling had believed.
A widely accepted description involving pπ – dπ bonding was initially proposed by D. W. J. Cruickshank. In this model, fully occupied p orbitals on oxygen overlap with empty sulfur d orbitals (principally the dz2 and dx2–y2). However, in this description, despite there being some π character to the S−O bonds, the bond has significant ionic character. For sulfuric acid, computational analysis (with natural bond orbitals) confirms a clear positive charge on sulfur (theoretically +2.45) and a low 3d occupancy. Therefore, the representation with four single bonds is the optimal Lewis structure rather than the one with two double bonds (thus the Lewis model, not the Pauling model). In this model, the structure obeys the octet rule and the charge distribution is in agreement with the electronegativity of the atoms. The discrepancy between the S−O bond length in the sulfate ion and the S−OH bond length in sulfuric acid is explained by donation of p-orbital electrons from the terminal S=O bonds in sulfuric acid into the antibonding S−OH orbitals, weakening them resulting in the longer bond length of the latter.
However, the bonding representation of Pauling for sulfate and other main group compounds with oxygen is still a common way of representing the bonding in many textbooks. The apparent contradiction can be cleared if one realizes that the covalent double bonds in the Lewis structure in reality represent bonds that are strongly polarized by more than 90% towards the oxygen atom. On the other hand, in the structure with a dipolar bond, the charge is localized as a lone pair on the oxygen.
Methods of preparing metal sulfates include treating a metal, metal hydroxide, metal oxide, or metal carbonate with sulfuric acid:
- Zn + H2SO4 → ZnSO4 + H2
- Cu(OH)2 + H2SO4 → CuSO4 + 2 H2O
- CdCO3 + H2SO4 → CdSO4 + H2O + CO2
Many examples of ionic sulfates are known, and many of these are highly soluble in water. Exceptions include calcium sulfate, strontium sulfate, lead(II) sulfate, and barium sulfate, which are poorly soluble. Radium sulfate is the most insoluble sulfate known. The barium derivative is useful in the gravimetric analysis of sulfate: if one adds a solution of, perhaps, barium chloride to a solution containing sulfate ions, the appearance of a white precipitate, which is barium sulfate, indicates that sulfate anions are present.
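In ionic terms the test reaction can be written as follows (a standard textbook equation, added here for clarity):
Ba2+ + SO4²⁻ → BaSO4↓
Because the molar mass of BaSO4 is about 233.4 g/mol and that of the sulfate ion is 96.06 g/mol, each gram of dried precipitate corresponds to roughly 0.41 g of sulfate in the original sample.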
The sulfate ion can act as a ligand attaching either by one oxygen (monodentate) or by two oxygens as either a chelate or a bridge. An example is the complex [Co(en)2(SO4)]+Br− or the neutral metal complex PtSO4(P(C6H5)3)2 where the sulfate ion is acting as a bidentate ligand. The metal–oxygen bonds in sulfate complexes can have significant covalent character.
Uses and occurrence
Sulfates are widely used industrially. Major compounds include:
- Gypsum, the natural mineral form of hydrated calcium sulfate, is used to produce plaster. About 100 million tonnes per year are used by the construction industry.
- Copper sulfate, a common algaecide; the more stable form (CuSO4) is used as the electrolyte in galvanic cells
- Iron(II) sulfate, a common form of iron in mineral supplements for humans, animals, and soil for plants
- Magnesium sulfate (commonly known as Epsom salts), used in therapeutic baths
- Lead(II) sulfate, produced on both plates during the discharge of a lead–acid battery
- Sodium laureth sulfate, or SLES, a common detergent in shampoo formulations
- Polyhalite, hydrated K2Ca2Mg-sulfate, used as fertiliser.
Occurrence in nature
Sulfate-reducing bacteria, anaerobic microorganisms such as those living in sediment or near deep-sea thermal vents, use the reduction of sulfates coupled with the oxidation of organic compounds or hydrogen as an energy source for chemosynthesis.
Some sulfates were known to alchemists. The vitriol salts, from the Latin vitreolum, glassy, were so-called because they were some of the first transparent crystals known. Green vitriol is iron(II) sulfate heptahydrate, FeSO4·7H2O; blue vitriol is copper(II) sulfate pentahydrate, CuSO4·5H2O and white vitriol is zinc sulfate heptahydrate, ZnSO4·7H2O. Alum, a double sulfate of potassium and aluminium with the formula K2Al2(SO4)4·24H2O, figured in the development of the chemical industry.
Sulfates occur as microscopic particles (aerosols) resulting from fossil fuel and biomass combustion. They increase the acidity of the atmosphere and form acid rain. The anaerobic sulfate-reducing bacteria Desulfovibrio desulfuricans and D. vulgaris can remove the black sulfate crust that often tarnishes buildings.
Main effects on climate
The main direct effect of sulfates on the climate involves the scattering of light, effectively increasing the Earth's albedo. This effect is moderately well understood and leads to a cooling from the negative radiative forcing of about 0.4 W/m2 relative to pre-industrial values, partially offsetting the larger (about 2.4 W/m2) warming effect of greenhouse gases. The effect is strongly spatially non-uniform, being largest downstream of large industrial areas.
The first indirect effect is also known as the Twomey effect. Sulfate aerosols can act as cloud condensation nuclei and this leads to greater numbers of smaller droplets of water. Lots of smaller droplets can scatter light more efficiently than just a few larger droplets. The second indirect effect is the further knock-on effects of having more cloud condensation nuclei. It is proposed that these include the suppression of drizzle, increased cloud height, the facilitation of cloud formation at low humidities, and longer cloud lifetime. Sulfate may also result in changes in the particle size distribution, which can affect the clouds' radiative properties in ways that are not fully understood. Chemical effects such as the dissolution of soluble gases and slightly soluble substances, surface tension depression by organic substances and accommodation coefficient changes are also included in the second indirect effect.
The indirect effects probably have a cooling effect, perhaps up to 2 W/m2, although the uncertainty is very large. Sulfates are therefore implicated in global dimming. Sulfate is also the major contributor to stratospheric aerosol formed by oxidation of sulfur dioxide injected into the stratosphere by impulsive volcanoes such as the 1991 eruption of Mount Pinatubo in the Philippines. This aerosol exerts a cooling effect on climate during its 1-2 year lifetime in the stratosphere.
Hydrogen sulfate (bisulfate)
Molar mass: 97.071 g/mol
Melting point: 270.47 °C (518.85 °F; 543.62 K)
Boiling point: 623.89 °C (1,155.00 °F; 897.04 K)
Vapor pressure: 0.00791 Pa (5.93 × 10−5 mm Hg)
Conjugate acid: Sulfuric acid
The conjugate base of sulfuric acid (H2SO4), a dense, colourless, oily, corrosive liquid, is the hydrogen sulfate ion (HSO4⁻), also called the bisulfate ion. Sulfuric acid is classified as a strong acid; in aqueous solutions it ionizes completely to form hydronium ions (H3O+) and hydrogen sulfate (HSO4⁻). In other words, the sulfuric acid behaves as a Brønsted–Lowry acid and is deprotonated. Bisulfate has a molar mass of 97.078 g/mol. It has a valency of 1. An example of a salt containing the HSO4⁻ group is sodium bisulfate, NaHSO4. In dilute solutions the hydrogen sulfate ions also dissociate, forming more hydronium ions and sulfate ions (SO4²⁻). The CAS registry number for hydrogen sulfate is 14996-02-2.
Other sulfur oxyanions
- Lewis assigned to sulfur a negative charge of two, starting from its six valence electrons and ending up with eight electrons shared with the oxygen atoms. In fact, sulfur donates two electrons to the oxygen atoms.
- The prefix "bi" in "bisulfate" comes from an outdated naming system and is based on the observation that there is twice as much sulfate (SO2−
4) in sodium bisulfate (NaHSO4) and other bisulfates as in sodium sulfate (Na2SO4) and other sulfates. See also bicarbonate.
- Lewis, Gilbert N. (1916). "The Atom and the Molecule". J. Am. Chem. Soc. 38: 762–785. doi:10.1021/ja02261a002. (See page 778.)
- Pauling, Linus (1948). "The modern theory of valency". J. Chem. Soc.: 1461–1467. doi:10.1039/JR9480001461.
- Coulson, C. A. (1969). "d Electrons and Molecular Bonding". Nature. 221: 1106. Bibcode:1969Natur.221.1106C. doi:10.1038/2211106a0.
- Mitchell, K. A. R. (1969). "Use of outer d orbitals in bonding". Chem. Rev. 69: 157. doi:10.1021/cr60258a001.
- Cotton, F. Albert; Wilkinson, Geoffrey (1966). Advanced Inorganic Chemistry (2nd ed.). New York, NY: Wiley.
- Stefan, Thorsten; Janoschek, Rudolf (Feb 2000). "How relevant are S=O and P=O Double Bonds for the Description of the Acid Molecules H2SO3, H2SO4, and H3PO4, respectively?". J. Mol. Modeling. 6 (2): 282–288. doi:10.1007/PL00010730.
- Greenwood, Norman N.; Earnshaw, Alan (1997). Chemistry of the Elements (2nd ed.). Butterworth-Heinemann. ISBN 978-0-08-037941-8.
- Taylor, F. Sherwood (1942). Inorganic and Theoretical Chemistry (6th ed.). William Heinemann.
- Andrea Rinaldi (Nov 2006). "Saving a fragile legacy. Biotechnology and microbiology are increasingly used to preserve and restore the worlds cultural heritage". EMBO Reports. 7 (11): 1075–1079. doi:10.1038/sj.embor.7400844. PMC 1679785. PMID 17077862.
- Intergovernmental Panel on Climate Change (2007). "Chapter 2: Changes in Atmospheric Constituents and Radiative Forcing". Working Group I: The Scientific Basis.
- Current sulfate distribution in the atmosphere (Map).
- Pincus & Baker 1994
- Albrecht 1989
- Rissman, T. A.; Nenes, A.; Seinfeld, J. H. "Chemical Amplification (or dampening) of the Twomey Effect: Conditions derived from droplet activation theory" (PDF).
- Archer, David. Understanding the Forecast. p. 77. Figure 10.2
|
- 1 How do you make a dot plot?
- 2 What is a dot plot and how do you read it?
- 3 What is the first step in making a dot plot?
- 4 How do you scale a dot plot?
- 5 What does a dot plot tell you?
- 6 What is mode on a dot plot?
- 7 Which dot plot has more than one mode?
- 8 How do you find the standard error in a dot plot?
- 9 What is mean absolute deviation on a dot plot?
- 10 What does a small mad tell you about a set of data?
- 11 What is the mean and mad of this data set?
- 12 How do you find mean?
- 13 What are the steps to find the mean absolute deviation?
- 14 How do you find mean deviation in statistics?
- 15 What is the mode formula?
- 16 How do you find the deviation from the mean?
- 17 What is difference between mean deviation and standard deviation?
- 18 How do you interpret standard deviation?
- 19 How do you compare mean and standard deviation?
How do you make a dot plot?
What is a dot plot and how do you read it?
A dot plot is a simple plot that displays data values as dots above a number line. Dot plots show the frequency with which a specific item appears in a data set. Dot plots show the distribution of the data. For example, a dot plot might show that students spent 1 to 6 hours on homework.
What is the first step in making a dot plot?
To begin your basic Dot Plot, draw a line long enough to hold all of your data. Label the plot. Labeling your plot will need to be done on the bottom, under the line you drew. Choosing whether to use Numbers or Words will depend on what your data consists of.
How do you scale a dot plot?
To make a dot plot of the pulse rates, first draw a number line with the minimum value, 56, at the left end. Select a scale and label equal intervals until you reach the maximum value, 92. For each value in the data set, put a dot above that value on the number line. When a value occurs more than once, stack the dots.
What does a dot plot tell you?
A Dot Plot, also called a dot chart or strip plot, is a type of simple histogram-like chart used in statistics for relatively small data sets where values fall into a number of discrete bins (categories). With the Dot Plot, it indicates that all of you have chosen pizza.
What is mode on a dot plot?
Measures of center are used to identify the center of a data set. The mean is the average value in the data set. The median is the data value that, when listed in order from least to greatest, is the middle value in the data set. The mode, the number that appears most often, also describes the central tendency of a data set.
Which dot plot has more than one mode?
Answer: The correct option is 1. Calico Crayfish dot plot has two modes.
How do you find the standard error in a dot plot?
The standard error is more properly called the standard error of the mean; it is stdev/√N, where N is the number of data points and stdev is the standard deviation. If your data follow nice Gaussian statistics, it is an estimate of how well you know the “true” mean value of the supposed underlying distribution from which your data were drawn.
What is mean absolute deviation on a dot plot?
The mean absolute deviation is the average of the differences (deviations) of each value in the data set from the mean of the data set. Graphically, the deviations can be represented on a number line from a dot plot.
What does a small mad tell you about a set of data?
A small MAD (mean absolute deviation) tells us that the variability in the data set is low and that the data values are clustered close to the mean. Explanation: The variance and the mean absolute deviation both describe the variability of the values in a data set. A higher MAD means that the data values are more spread out from the mean.
What is the mean and mad of this data set?
Mean absolute deviation (MAD) of a data set is the average distance between each data value and the mean. Mean absolute deviation is a way to describe variation in a data set. Mean absolute deviation helps us get a sense of how “spread out” the values in a data set are.
How do you find mean?
The mean is the average of the numbers. It is easy to calculate: add up all the numbers, then divide by how many numbers there are. In other words it is the sum divided by the count.
What are the steps to find the mean absolute deviation?
Step 1: Calculate the mean. Step 2: Calculate how far away each data point is from the mean using positive distances. These are called absolute deviations. Step 3: Add those deviations together. Step 4: Divide the sum by the number of data points to get the mean absolute deviation.
How do you find mean deviation in statistics?
In three steps (a short C sketch follows this list):
- Find the mean of all values.
- Find the distance of each value from that mean (subtract the mean from each value, ignore minus signs)
- Then find the mean of those distances.
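A minimal sketch of these steps in C (the data values are invented for illustration):

#include <stdio.h>
#include <math.h>

int main(void) {
    double data[] = { 2.0, 4.0, 4.0, 5.0, 10.0 };
    int n = 5;

    /* Step 1: the mean is the sum divided by the count. */
    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += data[i];
    double mean = sum / n;

    /* Steps 2-3: average of the absolute deviations from the mean. */
    double dev_sum = 0.0;
    for (int i = 0; i < n; i++) dev_sum += fabs(data[i] - mean);
    double mad = dev_sum / n;

    printf("mean = %.2f, MAD = %.2f\n", mean, mad);  /* mean = 5.00, MAD = 2.00 */
    return 0;
}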
What is the mode formula?
For grouped data, the mode can be estimated with: Mode = L + [(fm − f1) / (2fm − f1 − f2)] × h, where L is the lower boundary of the modal class, fm is the frequency of the modal class, f1 and f2 are the frequencies of the classes immediately before and after it, and h is the class width.
How do you find the deviation from the mean?
To calculate the standard deviation of those numbers (a short C sketch follows this list):
- Work out the Mean (the simple average of the numbers)
- Then for each number: subtract the Mean and square the result.
- Then work out the mean of those squared differences.
- Take the square root of that and we are done!
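A minimal C sketch of these steps for the population standard deviation, which divides by N; the sample standard deviation divides by N − 1 instead (the data values are invented for illustration):

#include <stdio.h>
#include <math.h>

int main(void) {
    double data[] = { 2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0 };
    int n = 8;

    /* Work out the mean. */
    double sum = 0.0;
    for (int i = 0; i < n; i++) sum += data[i];
    double mean = sum / n;                      /* 5.0 */

    /* Mean of the squared differences, then the square root. */
    double sq_sum = 0.0;
    for (int i = 0; i < n; i++) sq_sum += (data[i] - mean) * (data[i] - mean);
    double stdev = sqrt(sq_sum / n);            /* population standard deviation: 2.0 */

    printf("standard deviation = %.2f\n", stdev);
    return 0;
}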
What is difference between mean deviation and standard deviation?
Standard deviation is used to describe the variability of data and is frequently used to gauge the volatility of a stock. A mean is the average of a set of two or more numbers, i.e. the simple average of the data, while the standard deviation measures how much the values vary around that average.
How do you interpret standard deviation?
A low standard deviation indicates that the data points tend to be very close to the mean; a high standard deviation indicates that the data points are spread out over a large range of values.
How do you compare mean and standard deviation?
Standard deviation is an important measure of spread or dispersion. It tells us how far, on average the results are from the mean. Therefore if the standard deviation is small, then this tells us that the results are close to the mean, whereas if the standard deviation is large, then the results are more spread out.
|
Pythagorean Theorem Worksheets Free and Printable
These Pythagorean Theorem worksheets are downloadable, printable, and come with corresponding printable answer pages. In mathematics, the Pythagorean theorem is a relation in Euclidean geometry among the three sides of a right triangle. It states that the square of the hypotenuse (the side opposite the right angle) is equal to the sum of the squares of the other two sides. Test your knowledge with these Pythagorean Theorem Worksheets.
Pythagorean Theorem PDF – A
Pythagorean Theorem PDF – A Answers
Pythagorean Theorem PDF – B
Pythagorean Theorem PDF – B Answers
Pythagorean Theorem PDF – C
Pythagorean Theorem PDF – C Answers
Pythagorean Theorem PDF – D
Pythagorean Theorem PDF – D Answers
Pythagorean Theorem PDF – E
Pythagorean Theorem PDF – E Answers
Pythagorean Theorem PDF – F
Pythagorean Theorem PDF – F Answers
Pythagorean Theorem PDF – G
Pythagorean Theorem PDF – G Answers
Pythagorean Theorem PDF – H
Pythagorean Theorem PDF – H Answers
Pythagorean Theorem PDF – I
Pythagorean Theorem PDF – I Answers
Pythagorean Theorem PDF – J
Pythagorean Theorem PDF – J Answers
The Pythagorean Theorem is a really important idea in math, and it’s named after a Greek mathematician named Pythagoras. It’s all about right triangles, which are triangles with one angle that’s 90 degrees, like the corner of a book.
Here’s what the Pythagorean Theorem says in a simple way:
“In a right triangle, the square of the length of the longest side, called the hypotenuse, is equal to the sum of the squares of the lengths of the other two sides.”
Now, let’s break that down a bit:
- Right Triangle: This is a special kind of triangle with one 90-degree angle. The other two angles are smaller, and we call them acute angles.
- Hypotenuse: This is the longest side of the right triangle, and it’s the one opposite to the 90-degree angle.
- Squares: To find the square of a number, you multiply it by itself. So, if you have a number like 3, its square would be 3 x 3, which is 9.
- Sum: This means adding things together. So when we say “the sum of the squares,” we’re talking about adding up the squares of the two shorter sides.
In simple terms, the Pythagorean Theorem helps us figure out the length of one side of a right triangle if we know the lengths of the other two sides. It’s like a secret math tool to solve triangle puzzles! Here’s the formula in math language:
c² = a² + b²
- “c” is the length of the hypotenuse.
- “a” and “b” are the lengths of the other two sides.
So, if you know the lengths of two sides of a right triangle, you can use this formula to find the length of the third side. It’s a cool trick that helps us understand and work with triangles in math!
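A tiny sketch of the formula in C, using sqrt from the standard math.h library (the side lengths 3 and 4 are just an example):

#include <stdio.h>
#include <math.h>

int main(void) {
    double a = 3.0, b = 4.0;          /* the two shorter sides                      */
    double c = sqrt(a * a + b * b);   /* c² = a² + b², so c = sqrt(a² + b²)         */
    printf("hypotenuse = %.1f\n", c); /* prints 5.0                                 */
    return 0;
}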
Perimeter of Triangles PDFs
Volume of Cubes PDFs
Volume PDFs – Rectangular Prisms, Cones, Spheres, Cylinders, Triangular Prisms
Identifying Polygons PDFs
Surface Area PDFs
Measuring Lines PDFs
|