The Python programming language has picked up pace in the last decade. Its increasing popularity has created a lot of demand for Python developers in domains like machine learning, data science, etc. One of the main reasons for this growth has been the out-of-the-box features that Python comes with. One such feature is the map function, which applies a given function to every item of one or more iterables. In this article, we will discuss the map function in detail.
The map function takes a function and applies it to each item in an iterable. For instance, let's say we have a function that calculates the length of a string. Using the map function we can apply this function to a list containing a bunch of strings. The output will have the length of each item in the list.
Following is a simple program using the map function to calculate the length of each string in a list.
def func(x):
    return len(x)

a = ['Sunday', 'Monday', 'Tuesday', 'Wednesday', 'Thursday', 'Friday', 'Saturday']
b = map(func, a)
print(list(b))
Output: [6, 6, 7, 9, 8, 6, 8]
Function – It is a mandatory parameter that stores the function to be applied to each item by the map function.
Iterable – It is a mandatory parameter as well and stores the iterable whose items will be passed as arguments to the function. Several iterables can be supplied, in which case the function must accept as many arguments as there are iterables.
res = map(function, iterable)
def add(a, b):
    return a + b

x = [1, 3, 5, 7, 9]
y = [2, 4, 6, 8, 10]
res = map(add, x, y)
print(list(res))
Output: [3, 7, 11, 15, 19]
def cube(n):
    return n * n * n

a = list(range(1, 11))
res = map(cube, a)
print(list(res))
Output: [1, 8, 27, 64, 125, 216, 343, 512, 729, 1000]
a = list(range(1, 11))
res = map(lambda x: x * x, a)
print(list(res))
Output: [1, 4, 9, 16, 25, 36, 49, 64, 81, 100]
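One behavior the examples above do not show: in Python 3, map returns a lazy iterator rather than a list, so its results can be consumed only once. A minimal sketch (the variable name is illustrative):

squares = map(lambda x: x * x, [1, 2, 3])
print(list(squares))  # [1, 4, 9] - the iterator is consumed here
print(list(squares))  # [] - a second pass yields nothing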
In this article, we have learned how to use the map function in Python with various examples. Looking at the examples, one can see how tidy and readable code is in the Python programming language. Readability and easy syntax are among the many reasons why Python has become so popular in the last decade. With that increasing popularity, demand has also grown in domains like machine learning, artificial intelligence, and data science. To master your skills, enroll in Edureka's Python certification program and kick-start your learning.
Have any questions? Mention them in the comments. We will get back to you as soon as possible.
Quantum computing is a type of computing that uses quantum bits or "qubits" instead of the traditional bits used in classical computing. Unlike classical bits, which can only be in a state of either 0 or 1 at any given time, qubits can exist in multiple states simultaneously, a property known as superposition.
This ability to exist in multiple states at once is what gives quantum computers their power. It allows them to perform certain calculations much faster than classical computers. Additionally, qubits can also become "entangled" with each other, which means that the state of one qubit can be used to determine the state of another, no matter how far apart they are.
Quantum computing is still a relatively new field, but it has the potential to revolutionize the way we process information and solve complex problems. It could have a significant impact on fields such as cryptography, drug discovery, and artificial intelligence, among others.
Quantum computing relies on the principles of quantum mechanics, which govern the behavior of matter and energy at the atomic and subatomic level. In classical computing, information is represented by bits, which are either in a state of 0 or 1. However, in quantum computing, information is represented by qubits, which can exist in a superposition of both 0 and 1 states at the same time.
This means that a quantum computer can perform many calculations simultaneously, which can drastically reduce the time needed to solve certain types of problems. For example, a quantum computer could factor very large numbers much faster than a classical computer.
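As a rough mathematical sketch of the superposition described above (standard textbook notation, not taken from this text), a single qubit state can be written as

\[
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,
\]

where a measurement yields 0 with probability |α|² and 1 with probability |β|².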
Another key feature of quantum computing is entanglement. This is a phenomenon in which two or more qubits become linked in such a way that the state of one qubit is dependent on the state of the others, no matter how far apart they are. This allows quantum computers to perform certain operations much faster than classical computers.
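A standard illustration of entanglement (again textbook notation, not from this text) is the two-qubit Bell state

\[
|\Phi^{+}\rangle = \frac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right),
\]

for which measuring either qubit immediately fixes the measured value of the other.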
Quantum computing is still in the early stages of development, and there are many technical challenges that must be overcome before it can be widely adopted. However, there is great excitement about its potential to solve some of the world's most complex problems, such as simulating molecular interactions to aid drug discovery or improving machine learning algorithms for artificial intelligence.
Quantum computing has the potential to significantly impact cryptography. Many modern cryptographic protocols rely on the difficulty of factoring large numbers, a problem that can be solved much faster with quantum computers than classical computers. As such, there is a need to develop new cryptographic protocols that can resist attacks from quantum computers.
Quantum computing may also have applications in optimization problems, which involve finding the best solution out of a large number of possible solutions. For example, it could be used to optimize supply chain logistics or improve financial portfolio management.
However, building a practical quantum computer is a major engineering challenge. Qubits are very fragile and can be easily affected by their environment, making it difficult to maintain their quantum state long enough to perform calculations. This is known as the problem of quantum decoherence. Additionally, scaling up quantum computers to a large number of qubits is also a significant challenge.
Despite these challenges, there has been significant progress in the field of quantum computing in recent years, and many tech companies and research institutions are investing heavily in its development. It will be exciting to see how this technology develops and the impact it could have on the world.
One of the biggest advantages of quantum computing is its potential to solve problems that are practically intractable for classical computers. This includes problems in areas such as cryptography, optimization, and simulation. For example, a quantum computer could be used to break the RSA encryption algorithm used in many secure communication systems, or to simulate the behavior of complex molecules to aid drug discovery.
Quantum computing can also potentially accelerate machine learning and artificial intelligence (AI) algorithms. This is because many AI algorithms involve optimization problems that can be solved much faster on a quantum computer than on a classical computer. Additionally, quantum computers can be used to generate large amounts of random numbers, which are important in many machine learning applications.
There are several different approaches to building a quantum computer, including superconducting circuits, ion traps, and topological qubits. Each approach has its own advantages and challenges, and it is still unclear which approach will ultimately be the most successful.
Despite the many potential advantages of quantum computing, there are also significant challenges that must be overcome. In addition to the problem of quantum decoherence mentioned earlier, there is also the challenge of developing error correction codes that can protect quantum information from errors introduced by noise and other factors. Additionally, it is unclear how to program a quantum computer in a way that is intuitive and accessible to non-experts.
Overall, quantum computing is a rapidly evolving field with many exciting possibilities. It will be interesting to see how it develops in the coming years and what kind of impact it will have on various fields.
Theory/X-ray trigonometric parallax
In visual astronomy the distance to nearby stars is calculated using the trigonometric parallax of their movements relative to background stars or galaxies that are immobile within the resolution of the telescope used. When X-ray astronomy detectors have sufficient resolution, it should be possible to measure the X-ray trigonometric parallax of nearby stars.
Distance measurement by parallax is a special case of the principle of triangulation, which states that one can solve for all the sides and angles in a network of triangles if, in addition to all the angles in the network, the length of at least one side has been measured. Thus, the careful measurement of the length of one baseline can fix the scale of an entire triangulation network. In parallax, the triangle is extremely long and narrow, and by measuring both its shortest side (the motion of the observer) and the small top angle (always less than 1 arcsecond, leaving the other two close to 90 degrees), the length of the long sides (in practice considered to be equal) can be determined.
Assuming the angle is small (see derivation below), the distance to an object (measured in parsecs) is the reciprocal of the parallax (measured in arcseconds): d (pc) = 1 / p (arcsec). For example, the distance to Proxima Centauri is 1/0.7687 = 1.3009 parsecs (4.243 ly).
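A minimal sketch of this reciprocal relation in Python, using the standard conversion 1 pc ≈ 3.2616 light-years:

PC_TO_LY = 3.2616  # light-years per parsec (standard constant)

def distance_pc(parallax_arcsec):
    # distance in parsecs is the reciprocal of the parallax in arcseconds
    return 1.0 / parallax_arcsec

d = distance_pc(0.7687)              # Proxima Centauri's parallax in arcsec
print(round(d, 4), "pc")             # ~1.3009 pc
print(round(d * PC_TO_LY, 3), "ly")  # ~4.243 ly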
Parallax is a displacement or difference in the apparent position of an object viewed along two different lines of sight, and is measured by the angle or semi-angle of inclination between those two lines. It is the "apparent displacement, or difference in the apparent position, of an object, caused by actual change (or difference) of position of the point of observation; spec. the angular amount of such displacement or difference of position, being the angle contained between the two straight lines drawn to the object from the two different points of view, and constituting a measure of the distance of the object."
In theoretical astronomy, whether the Earth is taken to be fixed (serving as a reference point from which to measure the movements of objects or entities) or the Sun is placed near the center of the solar system is a matter of simplicity and calculational accuracy. Copernicus's theory provided a strikingly simple explanation for the apparent retrograde motions of the planets, namely as parallactic displacements resulting from the Earth's motion around the Sun, an important consideration in Johannes Kepler's conviction that the theory was substantially correct. "[Kepler] knew that the tables constructed from the heliocentric theory were more accurate than those of Ptolemy" with the Earth at the center. In computing terms, of two competing programs, one written for each theory, the heliocentric program finishes first for a mutually specified high degree of accuracy.
Orbits come in many shapes and motions. The simplest forms are a circle or an ellipse.
Stars "of spectral type S are characterized by unusual photospheric abundances which imply enrichment of the stellar surface by nucleosynthesis products. Spectroscopically, S stars are identified by bands of ZrO and LaO, replacing the TiO bands found in M stars. The spectra of S stars indicate strong enhancement of s-process elements in the photosphere (an accident of nomenclature - when the S spectral type was introduced, the slow neutron capture process was unknown). Abundance analyses show that in S stars, the C/O ratio is very close to unity [...], which also implies the presence of the products of nucleosynthesis at the stellar surface."
The "extrinsic S stars, includes stars which have elemental abundances which appear to have been altered by mass transfer from a binary companion."
The "intrinsic S stars, includes stars which have high luminosity and lie on the asymptotic giant branch (AGB). They show evidence that their compositional abnormalities are a result of nucleosynthesis and [perhaps] convective mixing to the surface. In particular, a defining characteristic which distinguishes the two types is that the intrinsic S stars contain technetium, while the extrinsic S stars do not."
"Both HCN and SiO have readily observable lines at 3 mm." χ Cyg is at a distance of 170 pc, but parallax measurements put it at D = 144 ± 25 pc (Stein 1991), parallax of 5.53 mas (198 ± 38) pc as of 2007 according to SIMBAD.
"All of these stars are bright at 2 µm and therefore have circumstellar dust [...] In one observing session, we obtained a 5 x 5 cross at HPBW spacings for the star χ Cyg in the CO J = 2-1 line. The data were relatively noisy because of limited integration time and weather conditions but do indicate that the envelope is extended with respect to the 25" telescope beam."
χ Cyg was "detected in the SiO v = 1 J = 2-1 maser emission line [...] χ Cyg has an unusually large dust/gas ratio of 9.0 x 10^-3 [The dust-to-gas ratio for S stars detected in CO J = 1-0 is] 9.0 x 10^-3 [...] For one star in our sample namely χ Cyg, the SiO J = 2-1, v = 0 emission has been mapped interferometrically [...] the SiO abundance at the base of the expanding envelope must be ~ 2 x 10^-5 to explain the observed intensity distribution of the SiO emission. Thus, a substantial fraction (30%-50%) of all silicon atoms are in the form of gas phase SiO at the point where molecules are injected into the stellar wind. As the gas moves away from the star, the SiO is depleted from the gas, presumably by the process of grain formation, such that at radii of several x 10^15 cm, the SiO gas phase abundance has fallen by > 90%. [...] χ Cyg, which has a relatively low mass-loss rate and hence low envelope opacity to UV photons."
Theoretical trigonometric parallax
The distance from a parallax measurement in pc is given by D = 1/p in arcseconds. The "standard deviation, which describes the 'typical' amount of error, is not negligible in comparison with the parallax. [...] the non-linearity of the relation causes the distribution function of the estimates [...] and thus the distribution function of the errors in distance [...] to be skewed."
For example, the most recent parallax for χ Cyg is 5.53 ± 1.10 mas. The skewness appears when the distance is calculated: D = 1/0.00553 = 181 pc, but the range runs from 151 to 226 pc, i.e. 181 +45/−30 pc.
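A short sketch of the arithmetic behind this skewed interval, using the parallax and error quoted above (in milliarcseconds):

p, sigma = 5.53, 1.10            # chi Cyg parallax and its error, in mas
d_mid = 1000.0 / p               # 1000/mas gives the distance in pc
d_far = 1000.0 / (p - sigma)     # a smaller parallax means a larger distance
d_near = 1000.0 / (p + sigma)
print(round(d_mid), round(d_near), round(d_far))                # 181 151 226
print("+", round(d_far - d_mid), "/ -", round(d_mid - d_near))  # + 45 / - 30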
"Extreme values are less likely to appear in a small sample than in a larger one. The reason is that with a small sample it is not very probable that any of the sample parallaxes will have an absolute value close to zero. As the sample size is increased the absolute value nearest to zero will continually decrease and the corresponding very large value of [D] will drive up the sample variance. Furthermore, this anomalous value will more and more seriously distort the value of the sample mean. The increasingly erratic behaviour of the sample mean as sample size is increased contrasts with the behaviour of the mean for a Gaussian distribution, which more and more closely approaches the population mean with increasing sample size."
"Physical entities are (almost) always characterized by a numerical parameter whose value can only be approached through some measuring process, by means of a statistical adjustment of a set of appropriate measurements. All that we can ever obtain is a best (in some prescribed sense) estimate of that value. By itself this estimate is, strictly speaking, meaningless, for it simply defines an interior point of a set of possible values; unless one has some estimate of the extremes of that set (to within some level of probability) in addition to the best estimate itself, one cannot exclude any value of that parameter. In many cases the probabilities of nearby values can be described by a function that is symmetric around the best estimate, and the parameter values bounding an interval (or a region if one happens to be dealing with multivariate analysis) which has a certain cumulative probability (usually 0.68) define an 'error bar' or error interval whose endpoints are equally distant from the best estimate."
The "positions of H2O masers in Sgr B2, a massive star forming region in the Galactic center, relative to an extragalactic radio source with the Very Long Baseline Array [have been determined]."
"Most luminosity and many mass estimates (e.g. based on column densities) scale as the square of the source distance, while masses based on total densities or orbit fitting (using proper motions) scale as the cube of distance."
"Most classical techniques involve determining the distributions of large numbers of bright sources that are assumed to be symmetrically distributed about the Galactic center. Distances to these sources are mostly photometric, which require accurate knowledge of absolute magnitudes and detailed calibration of the effects of metallicity, extinction and crowding, and combined systematic uncertainties arguably are at least 5% and possibly larger."
"For both Sgr B2M and Sgr B2N we used the background source, J1745–2820, as a position reference for the parallax and proper motion measurements [...] J1745–2820 is a well-studied extragalactic radio source (Bower, Backer & Sramek 2001) that has been used in previous VLBA astrometric observations of Sgr A* [...] J1745–2820 is projected only 20′ west of the maser sources, making it a nearly ideal position reference as it is very close to our maser targets, thereby canceling most systematic errors by a factor of 0.006 (the angular separation in radians). Also, as its offset is predominantly in the east-west direction, it samples similar source zenith angles as the target masers, which further reduces the effects of unmodeled atmospheric delays [...] We used a strong H2O maser spot as the interferometer phase-reference, because it was considerably stronger than the background source and could be detected on individual baselines in the available on-source time as short as 8 s."
The image at right is from the Very Large Array (VLA) "90-cm wavelength image, adapted from LaRosa et al. (2000), with the locations of Sgr A*, Sgr B2, and J1745–2820 indicated."
Nearby objects have a larger parallax than more distant objects when observed from different positions, so parallax can be used to determine distances.
"The majority of rotation measurements (RMs) have been made to pulsars in quadrant 1. An analysis of RMs by Rand & Lyne (1994) has the uniform component of the local magnetic field directed towards ℓ ~ 90° with a magnitude of ~ 1.4 μG. About 400 pc towards ℓ = 0° the field direction reverses."
In order to obtain parallax measurements, a number of 'fixed' sources must be available to determine the parallax motion. For a parallax measurement of Cygnus X-1 it is necessary to have "the positions of Cygnus X-1 and the background continuum sources, as well as their angular separations from Cygnus X-1."
At the right is an image of the two continuum 'fixed' radio sources J1953+3537 at J2000.0 R.A. 19h 53m 30.875712s Dec. +35° 37' 59.35927" and J1957+3338 at J2000.0 R.A. 19h 57m 40.549923s and Dec. +33° 38' 27.94339". These are needed to measure the parallax movement of Cygnus X-1.
Pulsars "PSRs J1744−1134 and J1024−0719 are two of only three isolated [millisecond pulsars] MSPs detected at X-ray energies. Since these MSPs have similar spin parameters [...], and presumably similar evolutionary histories, comparing their X-ray properties is useful in understanding the origin of their X-ray emission. With our revised distance estimates, the X-ray luminosity of PSR J1024−0719, Lx < 1 × 1029d2erg s−1, is less than a third of the previously accepted value, and for PSR J1744−1134, Lx = 4 × 1029d2erg s−1, is slightly higher than the previous value, and much larger than that of PSR J1024−0719".
"When detection of neutral hydrogen (HI) absorption of the pulsar signal is possible, an estimate, or at least a limit on the distance may be obtained using a Galactic rotation model".
"There is strong evidence for an elongated cavity in the neutral component of the [local insterstellar medium] LISM. This cavity surrounds the Sun and extends several hundred parsecs into quadrant 3 (Lucke 1978). The cavity appears as a region of low reddening extending 500 pc between ℓ = 210° and 255° and 1.5 kpc toward ℓ = 240°. Running counter to this is very heavy obscuration beyond ~100 pc in the first quadrant. Similarly, HI column densities derived from ultraviolet observations show a marked paucity in HI along LOSs directed towards ℓ = 230° (Frisch & York 1983; Paresce 1984). A similar morphology for this cavity is gleaned from NaI absorption measurements".
"The timing measurements of PSR J1744−1134 were made as part of an ongoing millisecond pulsar (MSP) timing project using the 64-m Parkes radio telescope. [...] Between 1995 January and 1999 January we made regular observations of PSR J1744−1134 at 0.66 and 1.4 GHz. At 0.66 GHz we used a dual linear polarization receiver. During the period of 1997 April until 1998 August we used the center beam of the Parkes Multibeam receiver system for dual linear polarization observations about 1.4 GHz. At other times observations at 1.4 GHz were made with a dual circular polarization H-OH receiver. The downconverted signal was fed into the Caltech correlator (Navarro 1994) where it was digitized and autocorrelated. The autocorrelation functions were folded at the topocentric pulse period, Fourier transformed, and compressed to 180 s sub-integrations, 8 frequency channels and 512 phase bins. A typical observation consisted of 8 contiguous sub-integrations. At 0.66 GHz, the signal was recorded over 32 MHz of bandwidth. Near 1.4 GHz we observed with two 128 MHz bands centered near 1.4 and 1.6 GHz respectively."
The "NRAO Very Large Array (VLA; program AR677) [has been used] in the most extended A configuration on 2008 September 28 to find background extragalactic sources as position references near the target H2O maser sources."
We "selected unresolved sources in the NRAO VLA Sky Survey (NVSS) catalog within ≈ 2°of each maser target and observed both background candidates and target masers at 8 and 22 GHz. For W51 Main/South, we found one new background source J1922+1504 for VLBA observations in addition to two known calibrators from the VLBA Calibrator Survey-1 (VCS1; Beasley et al. 2002), J1922+1530 and J1924+1540. However, at 0.3 mas resolution of the VLBA, we found the new source J1922+1504 displayed a resolved structure and was not useful for parallax measurements"
"Using a four-block setup allows the middle block of observing time to be available for phase-referencing rapid-switching scans between the target maser and background continuum sources when the source elevation was the highest at most stations. We measured multi-band delays and fringe rates, mostly due to un-modeled atmospheric propagation delays, to determine zenith delay errors as a function of time for each antenna."
"A PDM operates in the following four trigger modes: a) Normal mode with a GTU of 2.5 μs for routine data taking of EAS, b) Slow mode with a programmable GTU up to a few ms, for the study of meteorites and other atmospheric luminous phenomena, c) Detector calibration mode with a GTU value suitable for the calibration runs, and d) Lidar mode with a GTU of 200ns."
"The Laser of the [Japanese Experiment Module (JEM) of ISS for the Extreme Universe Space Observatory] JEM-EUSO Lidar releases the short pulse (less than 10ns, 20mJ/pulse) of the UV photons with 355nm in the frequency of 50Hz. The returned pulses are observed by the main telescope in a higher time resolution (200ns) of the Lidar mode of the [Photo-Detector Module] PDM. The slow data of the main telescope can also be used to determine the cloud top height by trigonometric parallax."
"Evidences of non-thermal X-ray emission and TeV gamma-rays from the supernova remnants (SNRs) has strengthened the hypothesis that primary Galactic cosmic-ray electrons are accelerated in SNRs."
"So far, the canonical distance to the Vela SNR has been taken to be 500pc, a value which was derived from the analysis of its angular diameter in comparison with the Cygnus Loop and IC443 (Milne 1968), and pulsar dispersion determination (Taylor & Cordes 1993). However, recent parallax measurements clearly indicate that the distance of 500pc is too large. Cha et al. (1999) obtained high resolution Ca-II absorption line toward 68 OB stars in the direction of the Vela SNR. The distances to these stars were determined by trigonometric parallax measurements with the Hipparcos satellite and spectroscopic parallaxes based upon photometric colors and spectral types. The distance to the Vela SNR is constrained to be 250 ± 30pc due to the presence of the Doppler spread Ca-II absorption line attributable to the remnant along some lines of sight. Caraveo et al. (2001) also applied high-resolution astrometry to the Vela pulsar (PSR B0833-45) V ∼ 23.6 optical counterpart. Using Hubble Space Telescope observations, they obtained the first optical measurement of the annual parallax of the Vela pulsar, yielding a distance of 294+76 −50 pc. Therefore, we calculate the electron flux adopting a distance of 300 pc to the Vela SNR."
"To further characterize the distribution of electrons in the LISM it is useful to relate their location to other interstellar features, such as bubbles, superbubbles, and clouds of neutral gas. There is strong evidence for an elongated cavity in the neutral component of the LISM. [...] There are several features of interest within this cavity. One of these is the local hot bubble (LHB): a volume encompassing the Sun distinguished by low neutral gas densities and a 106 K, soft X-ray emitting gas"
The "neutral hydrogen column density [has] a level of N(HI)= 5 × 1019 cm−2"
Abundance "estimates for neutron-capture elements, including lead (Pb), and nucleosynthesis models for their origin, in [the] carbon-rich, very metal-poor [star], [...] LP 706-7 [are reported]. [...] A Pb abundance is also derived for LP 706-7 by a re-analysis of a previously observed spectrum."
"LP 706-7 [was] observed with the University College London coudé échelle spectrograph (UCLES) and Tektronix 1024×1024 CCD at the Anglo-Australian Telescope. [...] the numbers of photons obtained around the Pb I λ4057 are [...] 3000 per 0.04Å pixel (S/N∼80) for [...] LP 706-7".
"The surface gravity of LP 706-7 (Norris et al. 1997a) was based on the requirement that Fe I and Fe II lines give identical abundances. More recently, a trigonometric parallax for this star has been published from the Hipparcos mission (ESA 1997), π = 15.15 ± 3.24 mas. Somewhat surprisingly, this surface gravity indicates an absolute magnitude MV = 8.0 ± 0.4, which is subluminous compared to both main sequence and subgiant Population II stars with Teff = 6000 K. A subgiant of MV = 3.0 or 4.0 would have a parallax of only 1.5 or 2.4 mas. Either the Hipparcos measurement of this star is significantly in error, or the star is far more bizarre than its CH-star status suggests. If the temperature estimate (based on photometric colors) and the Hipparcos parallax were both correct, we should be forced to infer a radius ten times smaller than for a subgiant and four times smaller than for a main-sequence star, but the surface gravity appears inconsistent with such a compact object (since g ∝ M/R2). It seems most likely that the Hipparcos parallax is simply incorrect, although an examination of the records (D. W. Evans, priv. comm.) revealed no concerns."
For "LP 706-7, because radial-velocity variations that might be expected for a star with a white-dwarf companion have not yet been detected (Norris et al. 1997a)."
We "found strong excesses of neutron-capture elements in the two metal-deficient satrs LP 625-44 and LP 706-7 with [Fe/H]= −2.7 and −2.74, respectively, which are interpreted as the result of s-process nucleosynthesis from a single site. Namely, the abundant material polluted by s-process nucleosynthesis dominates over the original surface abundances of neutron-capture elements. For instance, the Ba abundance in these two stars is a factor of several hundred times higher than the general trend of model predictions at [Fe/H]= −2.7. Even the abundance of Eu, which is usually interpreted as a signature of the r process, but should also be produced by the s-process as well, is enhanced by more than a factor of 10 in these two stars. Therefore, the neutron-capture elements in these two stars should present almost pure products of s-process nucleosynthesis at low metallicity. The exceptions to this are the abundances of Sr and Y in LP 706-7, which show no distinct excess. Therefore, the contribution of the s-process to these two elements may not be significant for this star."
There "is no evidence of binarity for [...] LP 706-7 (Norris et al. 1997a)."
The "precise mechanism for chemical mixing of protons from the hydrogen-rich envelope into the 12C-rich layer is still unknown, even for stars with solar metallicity, despite several theoretical efforts (Herwig et al. 1997; Langer et al. 1999). This makes it even harder to understand the peculiar abundance pattern of the s-process elements found in carbon-rich, metal deficient stars such as LP 625-44 and LP 706-7."
What "physical conditions are necessary to reproduce the observed s-process abundance profile of LP 625-44 and LP 706-7 without adopting any specific stellar model."
As "long as the same neutron exposure is adopted, the abundance patterns of LP 625-44 and LP 706-7 are reproduced with equivalent reduced χ2 values, even in extreme conditions of very high neutron density, Nn ≳ 1011cm−3. These parameter values simulate, more or less, the s-process conditions expected during the thermal pulse phase (Iben 1977)."
"Almost all elements, except for Pb, were found to be made in the first neutron exposure. Even the lead abundance converges after about three recurrent neutron exposures. This is consistent with the small overlap factor, r ≈ 0.1, deduced in our best-fit model. [...] fixed neutron exposure τ = 0.71 for LP 625-44. The observed Pb/Ba ratio is reproduced in the few-pulse model only for a small overlap factor, r ≲ 0.2, while the Ba/Sr ratio is rather insensitive to r and allows for a wider range, r ≲ 0.65. The Pb abundance is so sensitive to r that large r-values (0.2 ≲ r) are almost entirely excluded [...] This is a characteristic feature of the s-process pattern observed in LP 625-44 and LP 706-7."
"The ratio is slightly higher in LP 706-7, [Pb/Ba] = +0.27 ± 0.24. This may indicate that a range of 13C amounts is indeed required in the most metal-poor AGB stars, as well as for the moderately metal-poor ones."
"The Hummer & Mihalas theory is used to describe the non-ideal effects due to perturbations on the absorber from protons and electrons. We use a truncation of the electric microfield distribution in the quasi-static proton broadening to take into account the fact that high electric microfields dissociate the upper state of a transition."
"For the lower Lyman lines (Lα, Lβ, and Lγ), close range collisions of the absorber with hydrogen atoms and protons cause the appearance of important satellites in the wings of the lines that are visible up to Teff ∼ 30, 000 K".
"The Stark effect is defined as the shifting — or splitting — of spectral lines under the action of an electric field. [In] the atmospheric plasma a local electric microfield [is] due to protons that is constant with time."
"Classically, the sum of the electric potentials of the absorber and the nearby protons only allows bound states in the local potential minima up to a certain energy, called the saddle point."
"The most reliable independent observational constraint for DA white dwarfs comes from trigonometric parallax measurements. [...] there exists a very good correlation between spectroscopically based photometric distance estimates and those derived from trigonometric parallaxes. Here we compare absolute visual magnitudes instead of distances. We first combine trigonometric parallax measurements with V magnitudes to derive MV (π) values. We then use the calibration of Holberg & Bergeron (2006) to obtain MV (spec) from spectroscopic measurements of Teff and log g. [...] the bright white dwarf 40 Eri B for which a very precise trigonometric parallax and visual magnitude have been measured by Hipparcos. These measurements yield MV = 11.01 ± 0.01, [...] atmospheric parameters determined from the spectroscopic technique remain in excellent agreement with the constraints imposed by trigonometric parallax measurements."
"The parallax distance of 357+43 −35 pc is over twice that derived from the dispersion measure using the Taylor & Cordes model for the Galactic electron distribution."
"The mean electron density in the path to the pulsar, ne = (8.8±0.9) × 10 −3 cm−3, is the lowest for any disk pulsar."
Comparing "the ne for PSR J1744−1134 with those for another 11 nearby pulsars with independent distance estimates[, ...] there is a striking asymmetry in the distribution of electrons in the local interstellar medium. The electron column densities for pulsars in the third Galactic quadrant are found to be systematically higher than for those in the first. The former correlate with the position of the well known local HI cavity in quadrant three. The excess electrons within the cavity may be in the form of HII clouds marking a region of interaction between the local hot bubble and a nearby superbubble."
The "pulsar distances provide information about the column density of free electrons along different lines-of-sight (LOSs). Since their discovery, pulsars have had their distances estimated primarily by measuring the dispersion delay between pulses arriving at two widely spaced frequencies. As this delay is a function of the integral of electron density along the LOS to the pulsar, or dispersion measure (DM), a model for the Galactic free electron distribution yields the pulsar’s distance".
A "measurement of the parallax of PSR J1744−1134 obtained by analysing pulse times of arrival (TOAs) spanning 4 years [is combined] with other similar measurements to study the electron distribution in the local ISM (LISM)."
Parallax "measurement of PSR J1744−1134 places it at a distance of 357+43 −35 pc. This distance is over twice the value of 166 pc derived from the DM using the Taylor & Cordes (1993) model. The improved distance measurement implies a mean electron density in the path to the pulsar of ne = (8.8±0.9) × 10−3 cm−3."
"When the distances to the sample of 12 local pulsars are projected onto the Galactic plane a striking asymmetry in the electron distribution becomes evident. [...] the LOS electron densities to pulsars in the third Galactic quadrant are systematically higher than those in the first. [...] The asymmetry may be more pronounced since the upper parallax limit of PSR B1929+10 sets an upper limit to its electron density (ne < 0.013 cm−3), while the Shklovskii effect implies a firm lower bound to the ne of PSR J1024−0719 (ne > 0.029 cm−3)."
The "large number of pulsars with lower than expected electron densities in quadrant 1 implies a dearth of ionized material in this quadrant at least out to ~ 1 kpc."
"In the case of the prompt release of electrons after the explosion, the flux from the Vela SNR is the largest among the known SNRs [...] The flux value is quite sensitive to the change of distance to Vela from 500pc to 300pc, since the solution for electron density yields a Gaussian distribution function of r [...]. The flux of electrons at a distance of 300pc is two orders of magnitude larger than at 500pc. [...] the Vela SNR is the most dominant source in the TeV region."
"An important example is the studies of the magneto-ionic medium in the Milky Way using the Faraday rotation measure RM and dispersion measure DM of pulsars – both are integrals along the line of sight involving interstellar magnetic field and thermal electron density."
"A pulsar’s own contribution to the observed RM is minor because the pulsar magnetosphere is populated by electron-positron pairs resulting in zero net Faraday rotation."
"Distance estimates now exist for a few hundreds of pulsars, resulting from three basic techniques: neutral hydrogen absorption (in combination with the Galactic rotation curve), trigonometric parallax and from associations with objects of known distance".
"The issue of autonomously detecting satellite and airplane tracks in images is by no means a new one. For decades, these tracks have been nothing more than a nuisance for astronomers–foreground artifacts that must be disposed of in the preprocessing of data [...] the Recognition by Adaptive Subdivision of Transformation Space algorithm removes satellite streaks directly from images using a geometric approach that assumes the tracks are straight lines [...] the Random Sampling and Consensus algorithm [allows] for postprocessing removal of curved tracks and scratches as well."
"While these streaks may be a source of noise in the field of astronomy, for applications such as the Space Surveillance Network (SSN), they are the signal. A track from a satellite or piece of debris, along with time-stamp information, allows the SSN to make an equatorial angles-only determination of its orbit. [A way of] obtaining the timestamp information [...] is to measure the start and end times of an exposure and extract the end points of the imaged track(s). [...] [For determining the range of an artificial satellite using its observed trigonometric parallax] the error in detecting the end points may very well dominate the other sources of error in the measurement ."
The purpose of the "Space-based Telescopes for Actionable Refinement of Ephemeris (STARE) [...] is to refine orbital information for satellites and debris by directly imaging them with CMOS imagers onboard a constellation of cube-satellites (CubeSats). The images acquired by a given sensor will be run through an algorithm in the onboard microprocessor that is tasked with extracting star and track end point coordinates and sending them to the ground (without the accompanying image). Since the attitude of the STARE satellites will not be precisely controlled, the telescopes may be rotating about the pointing axis."
"Once a contiguous set of pixels has been identified, it is characterized as a star, track, or unknown object (such as a delta or Compton scattered worm) based upon its ellipticity (e) and the number of pixels (N) it contains. These values are dependent on the optical system and detector used, but for STARE, a cut of e > 0.8 and N > 20 should effectively identify all real tracks. A perfectly straight track should have e = 1; the margin e = 0.8–1.0 allows for curvature and the possibility of overlapping stars or cosmic rays. The chance of a muon hit producing a track greater than 20 pixels long is extremely low."
Hot "coronae disappear slowly in the course of stellar evolution, since the evolutionary tracks are parallel to the revised X-ray dividing line and do not cross it. Furthermore, hybrid stars turn out to be quite common: they are just more massive, more active and maintain some coronal plasma beyond the onset of cool [stellar] winds."
"For the nearest (d < 18pc), MV is derived from the trigonometric parallax, and interstellar extinction can be neglected. For larger and trigonometrically less reliable distances, [use] the Wilson-Bappu magnitudes [...] For most of the more distant hybrid stars, extinction is non-negligible."
Evolutionary tracks [have been] updated "for the most recent opacities (i.e., OPAL), nuclear rates, neutrino losses and a refined equation of state".
"The launch of the Fermi Gamma-ray Space Observatory in June 2008 completely changed the status in studies of gamma-ray pulsars. The first published catalog of gamma-ray pulsars (Abdo et al. 2010) contains 46 gamma-ray pulsars including 8 millisecond pulsars, 21 young radio pulsars and 17 gamma-selected pulsars. After more than one and half years of all-sky survey observations by Fermi/LAT, more than 70 gamma-ray pulsars were discovered, including 25 gamma-selected pulsars (see reviews by Ray & Saz Parkinson 2010). High sensitivity of the Fermi/LAT makes a new era for pulsar discoveries, specially for the population of radio-quiet gamma-ray pulsars."
"Trigonometric parallax measurements of radio pulsars are the reliable method, but are only available for the nearby pulsars (< 0.4 kpc) specially for a few radio millisecond pulsars [...] For the Geminga pulsar, we estimate the distance of 0.19 ± 0.07 kpc which is well consistent with the distance value of 0.25+0.12 −0.06 kpc from the optical trigonometric parallax measurement (Faherty et al. 2007). [...] [For m]illisecond pulsars (MSPs) [...] their distances are generally measured by optical trigonometric parallax".
Trigonometric parallax may be somewhat wavelength dependent. In practice it depends on the angular resolution of the instrument and on a periodic set of measurements taken over a baseline (such as the diameter of the Earth's orbit) large enough for the parallactic shift to be resolved. To attempt X-ray trigonometric parallax, a candidate star whose parallax is within the resolution of at least one currently available or formerly available X-ray satellite is needed. If none qualify, even for the closest star, then a calculation that is time based may work, or a calculation using a longer periodic baseline is needed.
For the effort to succeed, a collection of three to five very distant, effectively unmoving X-ray sources must be available against which to determine the target's relative movement.
Since the idea of using X-ray astronomy satellites for parallax is novel and perhaps a bit premature, finding a relatively nearby source where the experiment may be performed may itself be a time-consuming task, perhaps too large for a single course.
According to SIMBAD, V645 Cen (Proxima Centauri) is an X-ray source in the catalogs: 1E, 2E, 1ES, RE, RX, 1RXS, and [FS2003]. Catalogs 1E through 1ES are the Einstein satellite observations. Catalogs RE through 1RXS are the ROSAT satellite observations. Catalog [FS2003] is a systematic search for variability among ROSAT All-Sky Survey X-ray sources by B. Fuhrmeister and J. H. M. M. Schmitt in an article published in Astronomy and Astrophysics.
The X-ray resolutions of these two satellites are
- High Energy Astrophysics Observatory 2 (Einstein X-ray Observatory) - "a spatial resolution of ~1′." and
- ROSAT - "~ 2 arcsec spatial resolution (FWHM)".
The visual-astronomy parallax of Proxima Centauri is 774.25 ± 2.08 mas, according to SIMBAD. As the resolution of neither Earth-orbiting satellite is finer than this parallax, it is unlikely that any set of X-ray observations from these two satellites can resolve parallax movement by Proxima Centauri.
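A quick numerical comparison using the quoted resolutions (~1 arcmin for Einstein, ~2 arcsec for ROSAT) and the SIMBAD parallax:

parallax_mas = 774.25           # Proxima Centauri parallax (SIMBAD), in mas
resolutions_mas = {
    "Einstein": 1 * 60 * 1000,  # ~1 arcmin = 60,000 mas
    "ROSAT": 2 * 1000,          # ~2 arcsec = 2,000 mas
}
for name, res in resolutions_mas.items():
    print(name, "resolution is", round(res / parallax_mas, 1),
          "times the parallax amplitude")
# Even ROSAT's ~2 arcsec resolution is roughly 2.6 times the full parallax,
# so neither satellite can resolve the parallactic shift.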
Perhaps one of the current X-ray satellites has sufficient resolution:
- AGILE - not stated so far,
- Chandra X-ray Observatory (Advanced X-ray Astrophysics Facility, (AXAF)) - Spatial resolution < 1 arcsec, HRC-I ~ 0.5 arcsec spatial resolution,
- Fermi Gamma-ray Space Telescope - not stated so far,
- INTEGRAL - spatial resolution 3′,
- MAXI - not stated so far,
- NuSTAR (Nuclear Spectroscopic Telescope Array Mission) - not stated so far,
- Suzaku (Astro-E2) - angular resolution of ~2′ (HPD); all gold-coated,
- Swift X-ray Telescope (XRT) - ~5 arcsec position accuracy, and
- X-ray Multi-Mirror Mission (XMM-Newton) - Spatial resolution 6" FWHM.
None of these can perform trigonometric parallax at present, although Chandra may be able to estimate the parallax.
"Chandra and XMM-Newton observations of the red dwarf star Proxima Centauri have shown that its surface is in a state of turmoil. Flares, or explosive outbursts, occur almost continually. This behavior can be traced to Proxima Centauri's low mass, about a tenth that of the Sun. In the cores of low mass stars, nuclear fusion reactions that convert hydrogen to helium proceed very slowly, and create a turbulent, convective motion throughout their interiors. This motion stores up magnetic energy which is often released explosively in the star's upper atmosphere where it produces flares in X-rays and other forms of light."
"The same process produces X-rays on the Sun, but the magnetic energy is released in a less explosive manner through heating loops of gas, with occasional flares. The difference is due to the size of the convection zone, which in a more massive star such as the Sun, is smaller and closer to its surface."
"Red dwarfs are the most common type of star. They have masses between about 8% and 50% of the mass of the Sun. Though they are much dimmer than the Sun, they will shine for much longer - trillions of years in the case of Proxima Centauri, compared to the estimated 10 billion-year lifetime of the Sun."
"X-rays from Proxima Centauri are consistent with a point-like source. The extended X-ray glow is an instrumental effect. The nature of the two dots above the image is unknown - they could be background sources."
"Image is 1.5 arcmin across."
"The M8 brown dwarf 2MASSW J1207334-393254 (hereafter 2M1207A) is [...] a [...] brown dwarf [member] of the ~ 10 Myr old TW Hydrae Assocation (Webb et al. 1999). 2M1207A is a very-low-mass substellar analog to a classical T Tauri star: It has broad, variable Hα emission due to accretion [...], mid-infrared excess due to a disk [...], ultraviolet emission due to hot accreted gas and warm circumstellar molecular hydrogen gas [...], and forbidden oxygen emission due to an outflow[, but] it is not detected in X rays [...] or radio [...], so is apparently relatively magnetically inactive."
A "red companion (2M1207B), [is] 5 magnitudes fainter in the K band. Common proper motion confirms that this is a bound pair [...] with a separation of 773 ± 1.4mas. The secondary has a late-L spectral type (Mohanty et al. 2007). The inferred luminosity implies a mass ~ 5MJ [...], although Mohanty et al. (2007) suggest that the secondary is 8 ± 2 jupiter masses and viewed through an edge-on disk."
"Because the TWA is a relatively nearby, loose association there has been some confusion on the distance to the system. Chauvin et al. (2004) adopted a distance of 70 pc, on the basis of theoretical models of brown dwarf evolution. The Hipparcos distance of TW Hya itself is 56+8 −6pc (Perryman et al. 1997). Mamajek (2005) used the moving cluster distance method to estimate the distance to 2M1207A to be 53 ± 7 pc, while Song et al. (2006) used the same method, but an updated proper motion and a different group membership list to estimate 59 ± 7 pc. With uncertainties in the distance to the TW Hya group of ~ 15%, firm conclusions about the natures of 2M1207 A and B, as well as other members of the group, have been elusive.” Here we present the first trigonometric parallax for 2M1207A. We confirm that it is a member of the TW Hya Association and put ... constraints on the planet candidate 2M1207B."
"Observations of 2M1207A in the IKC band were obtained at the CTIO 0.9m telescope by the RECONS group via the SMARTS Consortium. There are 54 parallax frames obtained over 2.14 years. [...] The resulting relative parallax is πrel = 17.93 ± 1.03 mas. VRI photometry was obtained in July 2007 on five nights using the same telescope [...] We estimate the correction to absolute parallax to be 0.58 ± 0.05 mas on the basis of photometry of the seven reference stars [...] The absolute parallax is therefore 18.51 ± 1.03 mas, for a distance of 54.0+3.2 −2.8pc. The observed proper motion is 66.7 ± 1.5 masyr−1 at position angle θ = 250.0 ± 2.4 degrees."
"The distance and proper motion of 2M1207 is consistent with TWA membership. The position angle expected for motion towards [the] TWA convergent point is 251.4 degrees, consistent with the measured proper motion. Using [a] radial velocity of +11.2±2.0 km s−1 for 2M1207A, the (U,V,W) space velocities are (−8,−18,−4) kms−1, consistent with [the] centroid group value of (−10.2,−17.1,−5.1) kms−1. In particular, the measured distance rules out any association with the background Lower Centaurus Crux [other] measurements and our distance, the projected separation is 41.7 ± 2.3 A.U."
"2M1207A to be 24±6MJ brown dwarf. Because they used [the] value of 53 pc as the distance, this mass is not changed significantly by a distance increase of 2%: 2M1207A is best understood as a ~ 25MJ brown dwarf. The disk parameters [...] also remain unchanged because they used the same distance. The observed V − Ks = 8.00 ± 0.19 is consistent with the M8 spectral type and suggests the accretion rate at the time was ≲ 10−11M⊙yr−1 [...] Like the young M dwarf AU Mic (Gl 803), 2M1207A lies ~ 1.5 magnitudes above the main sequence in the MV vs V − K diagram".
"The usual procedure for analyzing 2M1207B is to assume a bolometric correction appropriate to late-L dwarfs, and then fit the luminosity to evolutionary models. [Estimates are] 5 ± 2MJ for 70 pc, [...] 5 ± 3MJ for 59 pc, and [...] 3 − 4MJ for 53 pc. The trigonometric parallax would therefore support the last two estimates. [...] H and K-band near-infrared spectra of 2M1207B were best fit by an effective temperature of 1600 ± 100K. However, for the 3 − 5MJ fits, the expected effective temperature is more like 1000 − 1200 K. [Perhaps] the best resolution is that 2M1207B is viewed through an edge-on gray disk, and that therefore it is more luminous than otherwise estimated. 2M1207B is then a 8 ± 2MJ planetary mass brown dwarf. The wide separation and mass ratio (q ≈ 0.2 − 0.3) suggests this planetary-mass object did not form through core accretion [...For] 2M1207B [...] it is red compared to field brown dwarfs, which can be attributed to having more dust in the photosphere."
"From the accurate trigonometric parallax [...], the effective temperature (Teff = 10, 900 K) and the stellar radius (R = 0.00368 R⊙) are directly determined from the broad-band spectral energy distribution — the parallax method. The effective temperature and surface gravity are also estimated independently from the simultaneous fitting of the observed Balmer line profiles with those predicted from pure-hydrogen model atmospheres— the spectroscopic method (Teff = 10, 760 K, log g = 9.46). The mass of LHS 4033 is then inferred from theoretical mass-radius relations appropriate for white dwarfs. The parallax method yields a mass estimate of 1.310–1.330M⊙, for interior compositions ranging from pure magnesium to pure carbon, respectively, while the spectroscopic method yields an estimate of 1.318–1.335 M⊙ for the same core compositions. This star is the most massive white dwarf for which a robust comparison of the two techniques has been made."
"LHS 4033 (WD 2349−031) is a white dwarf [that] has also been part of the Luyten Half Second (LHS) survey μ ≥ 0.6′′ yr−1 white dwarf sample [...] virtually all of which have been targeted for accurate trigonometric parallaxes at the U.S. Naval Observatory, for purposes of estimating the luminosity function of cool white dwarfs."
Here are the "optical and infrared photometry for LHS 4033":
- V = 16.98 ± 0.02
- B–V = +0.19 ± 0.03
- V – I = +0.07 ± 0.03
- J = 16.97 ± 0.05
- J–H = +0.05 ± 0.07
- H–K = −0.10 ± 0.07
- πabs (mas) = 33.9 ± 0.6
- μrel (mas yr−1) = 701.4 ± 0.2
- PA (deg) = 66.3 ± 0.1
- Distance (pc) = 29.5 ± 0.5
- MV = 14.63 ± 0.04
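As a consistency check on the tabulated values (a sketch using only the numbers listed above):

import math

V = 16.98                      # apparent visual magnitude of LHS 4033
parallax_mas = 33.9            # absolute trigonometric parallax, in mas
d_pc = 1000.0 / parallax_mas   # distance in pc
M_V = V - 5 * math.log10(d_pc / 10.0)
print(round(d_pc, 1), "pc")    # 29.5 pc, matching the tabulated distance
print(round(M_V, 2))           # 14.63, matching the tabulated MV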
"Since the spectral type of LHS 4033 is DA and non-magnetic, the mass may be estimated by fits to the Balmer lines (see, e.g., Bergeron et al. 1992) in a much more rigorous fashion. The surface gravity used with suitable evolutionary models yields independent determinations of the mass and radius. The effective temperature may also be estimated from broad-band photometry once the dominant atmospheric constituent is known. This, along with an accurate trigonometric parallax, permits a different estimate of the luminosity, radius, and mass (Bergeron et al. 2001). While it has been possible to compare the parameter determinations of these methods for limited samples of white dwarfs, it is particularly interesting to do so for a massive star."
"Trigonometric parallax observations were carried out over a 6.05 year interval (1997.76 – 2003.81) using the USNO 1.55 m Strand Astrometric Reflector equipped with a Tek2K CCD camera (Dahn 1997). The absolute trigonometric parallax and the relative proper motion and position angle derived from the 150 acceptable frames [...]. The parallax and apparent V magnitude then yield an absolute magnitude [...]."
Optical "spectroscopy was secured on 2003 October 1 using the Steward Observatory 2.3-m reflector telescope equipped with the Boller & Chivens spectrograph and a UV-flooded Texas Instrument CCD detector. The 4.5 arcsec slit together with the 600 lines mm−1 grating blazed at 3568 Å in first order provided a spectral coverage of 3120–5330 Å at an intermediate resolution of ~ 6 Å FWHM. The 3000 s integration yielded a signal-to-noise ratio around 55 in the continuum."
"We first assume log g = 8.0 and determine the effective temperature and the solid angle, which, combined with the distance D obtained from the trigonometric parallax measurement, yields directly the radius of the star R. The latter is then converted into mass using an appropriate mass-radius relation for white dwarf stars. Here we first make use of the mass-radius relation of Hamada & Salpeter (1961) for carbon-core configurations. This relation is preferred to the evolutionary models of Wood (1995) or those of Fontaine et al. (2001), which extend only up to 1.2 and 1.3M⊙, respectively. [...] In general, the value of log g obtained from the inferred mass and radius (g = GM/R2) will be different from our initial assumption of log g = 8.0, and the fitting procedure is thus repeated until an internal consistency in log g is achieved. The parameter uncertainties are obtained by propagating the error of the photometric and trigonometric parallax measurements into the fitting procedure."
Our "spectroscopic solution Teff = 10, 760 ± 150 K and log g = 9.46 ± 0.04, which translates into M = 1.335± 0.011 and R = 0.00358± 0.00019 R⊙ using the Hamada-Salpeter mass-radius relation for carbon-core configurations, is in excellent agreement with the solution obtained with the photometry and trigonometric parallax method. This is arguably the most massive white dwarf subjected to a rigorous mass determination [...]. Note that despite the extreme surface gravity of LHS 4033, the Hummer-Mihalas formalism used in the line profile calculations remains perfectly valid, since the density at the photosphere remains low (ρ ~ 10−5 g cm−3) as a result of the high opacity of hydrogen at these temperatures."
The "parallax method with the Mg configurations yields a mass of 1.310 M⊙ (instead of 1.330 when C configurations are used), while the spectroscopic method yields a mass of 1.318 M⊙ (instead of 1.335 M⊙)."
For "the prototype variable RR Lyr, [...] the parallax inferred [...] appears in close agreement with Hubble Space Telescope absolute parallax."
We "assumed an intrinsic luminosity of RR Lyr in the range log L/L⊙= 1.65–1.80 to cover current uncertainties of HB models. On this basis, we derived MK=−0.541 ± 0.062 mag and a ‘pulsation’ parallax πpuls= 3.858 ± 0.131 mas, in close agreement with the HST absolute value πabs= 3.82 ± 0.20 mas."
Avoiding "any assumption about the RR Lyr bolometric luminosity. Adopting log P=−0.2466 [...], K= 6.54 mag [...], [Fe/H]=−1.39 ± 0.15 [...] and V= 7.784 mag [...] [using]
gives log L= 1.642 ± 0.024 + 0.535AV. [For] each adopted extinction correction, one derives directly from the observed V−K colour the luminosity value to be inserted into"
"without using any evolutionary predictions. Once MK is determined from [the above equation], both the intrinsic distance modulus (μ0=K− 0.11AV−MK) and the absolute visual magnitude (MV=V−AV−μ0) can be determined easily."
Despite "the substantial improvement in the accuracy of the RR Lyr trigonometric parallax provided by HST, when compared with previous measurements [...] a sound empirical determination of its absolute magnitude is still hampered by the intrinsic uncertainty in the HST measurement, even if the interstellar extinction to RR Lyr was known firmly."
Both "V and K absolute magnitudes of RR Lyr itself, estimated via the pulsational approach, are in good agreement with the trigonometric parallax recently measured by HST [...]. This suggests that at least for the metallicity of RR Lyr ([Fe/H]=−1.39), the pulsational approach is consistent with direct distance determinations."
"Hipparcos trigonometric parallaxes and photometric data [are available] for about 40 bright carbon stars [...] Individual absolute visual and bolometric magnitudes, normal color indices [blue (B), violet (V)] (B − V)0, absorption values and distance moduli were determined. By comparison with stellar evolutionary tracks for initial mass 1 ≤ M/M⊙ ≤ 4 it is found that the majority of CH- and R-stars are on the giant and subgiant branches, but N-stars occupy a region −4 < MV < −1 and 1.6 < (B − V)0 < 3.6 and correspond to an advanced stage of thermally pulsing asymptotic branch giants."
"Using Hipparcos parallaxes and proper motions, three multiple stars with a carbon star component are examined. Hipparcos data confirms a physical link between W CMa and HD 54306 (B2V), both probable members of the association CMa OB1. Some stars are located below the subgiant branch for the mass 1M⊙ and a number of the N stars are below the theoretical limit for carbon stars on the AGB."
"The most straightforward method, i.e. through trigonometric parallaxes, has hitherto been of little value, owing to the considerable distances even to the nearest carbon stars, and the imperfectness of previous measuring methods [...] The situation has radically changed after the mission by the astrometric satellite Hipparcos. The mean error of about 1 mas – a characteristic value for parallaxes measured by Hipparcos – provides us with reliable distance estimates inside, say, the 0.5 kpc region around the Sun including some 100 carbon stars."
"Hipparcos also supplied us with precise photometric data, giving the mean brightness estimate from ~ 100 observations of each star, a circumstance, which because of the variability of carbon stars, is of special value. Here, a problem specific to carbon stars – stars with a peculiar spectral energy distribution –, to accurately correct ground-based photometry for atmospheric extinction, is irrelevant for Hipparcos data."
"Hipparcos results clearly confirm that the great range of observed scatter in the color index B−V is intrinsic and not caused by different amounts of interstellar reddening [...] A considerable stretch in the horizontal direction is a result of enhanced sensitivity of the color index (B−V)0 to small temperature changes in a cool extended atmospheres; also various degree of violet depression play a definite role."
"Observations of [SSSPM J2231-7514 and SSSPM J2231-7515 imaged at right in the violet band] were carried out using the Danish Faint Object Spectrograph (DFOSC) on the Danish 1.54m Telescope in La Silla. Data were taken during the nights starting June 19-20, 2001 (local time) in relatively good almost photometric conditions."
For the blue (B) band, "[A grism was] used, number-7 (3800-6800°A, 5250°A blaze, 1.65°A/pixel resolution – a 1800s and a 1300s spectroscopic observation). Two spectrophotometric standards were taken using [the] grism, LTT 7379 and LTT 9239 [...], as well as a large number of zero, flat (both for imaging and spectroscopy for all grisms used) and arc-lamp (for both grisms) calibration frames. For broad-band photometric calibrations, Landolt standard fields (Landolt 1992) were observed repeatedly [...] a spectrum of one of the objects, SSSPM J2231-7515, which was discovered independently, had already been observed half a year earlier. The spectrum of this star was observed with the EFOSC spectrograph on the ESO 3.6m telescope during the night of 2 December 2000. A slit width of 1.5 arcsec was used with grism number 1 (3185-10940°A, 4500°A blaze, 6.30°A/pixel resolution) for three exposures of 300s each."
A "wide pair (93 arcsec angular separation) of extremely cool (Teff < 4000 K) white dwarfs [have] a very large common proper motion (~1.9 arcsec/yr). The objects were discovered in a high proper motion survey in the poorly investigated southern sky region with δ < −60° using SuperCOSMOS Sky Survey (SSS) data. Both objects, SSSPM J2231-7514 and SSSPM J2231-7515, show featureless optical spectra. Fits of black-body models to the spectra yield effective temperatures of 3810 K and 3600 K, respectively for the bright (V = 16.60) and faint (V = 16.87) component. Both degenerates are much brighter than other recent discoveries of cool white dwarfs with comparable effective temperatures and/or [blue] BJ − R colours."
"After measuring photometric zeropoints in 6 standard fields (Landolt 1992), we adopted a zeropoint of 24.66 (for 1s exposure time and airmass of 1.45 – the airmass of our acquisition frames) in the Bessel V-band. Examination of the measured zeropoints shows that conditions were very close to photometric (with a hint of very weak cirrus in some cases). Using this zeropoint, we measured the (Vega) magnitudes of the two objects to be 16.60 and 16.87 in V (Bessel). We also measured the magnitude of the extra object, accidentally landing on our 2 arcsec wide slit [...] to be 16.86 [...]. Instrumental magnitude errors are small (1-2%), the main source of error is the determination of the zeropoint. We therefore estimate the overall accuracy of these magnitudes to be better than 5%."
"Spectroscopic frames (science and standard fields and calibration frames) were also bias and zero subtracted, trimmed and flatfielded (using different flat-field frames for the two grisms). The only complication was the removal of focal plane geometric distortions to improve the sky subtraction. This was done by tracing lines in the wavelength calibration frames. A smooth distortion map was fitted to the results and was applied to all frames."
"All objects on the slit were traced along the dispersion axis. Sky subtraction was done through fitting in a 35 arcsec wide band, centered on the object, excluding the central 16 arcsec region. A relatively wide aperture was defined for all objects, which includes all the flux (but degrades the S/N slightly). Object spectra were ‘optimally’ extracted within this aperture, with both cosmic-ray removal (based on photon statistics) and a weighted sum based on estimated signal-to-noise ratio."
"Wavelength calibration was done using He-Ne arc exposures. For the number-7 grism (bluer, higher resolution) the procedure worked quite well (0.06Å RMS, 0.15Å maximum deviation) in the wavelength range 3889-6717Å."
"A detailed photometric and spectroscopic analysis of all known cool white dwarfs (4000 K< Teff < 12000 K) with trigonometric parallax measurements [exists]. [One] cool white [dwarf] with [an] effective [temperature] below 4000 K [...], WD 0346+246, has a trigonometric parallax measurement"
"With the [...] temperatures [...] of 3810 K and 3600 K (3100 K to 3800 K), the newly discovered objects are comparable to the coolest known white dwarf WD0346+246 with 3750 K [...] for which a trigonometric parallax of 36±5 mas (28±4 pc) has been measured".
"If we assume our objects to have the same physical properties (temperature, mass, chemical composition) as WD0346+246, we can estimate their distance from a comparison of their apparent V magnitudes. With V = 19.06 [...], WD0346+246 is more than two magnitudes fainter than our objects (16.60 and 16.87), and consequently we get distance estimates of 9 pc and 10.2 pc. These distance estimates have the same relative uncertainty as for the comparison object, i.e. ±1.3 pc and ±1.5 pc, respectively, and rely on the assumption of identical physical properties, which is unlikely to be the case."
As of 2002, "there are only 11 cool white dwarfs with Teff < 5000 K and trigonometric parallaxes of less than 25 pc [...]. All presently known or suspected degenerates with Teff < 4000 K are at trigonometric or photometric distances of more than 25 pc".
"Extrasolar-planet searches that target very low-mass stars and brown dwarfs are hampered by intrinsic or instrumental limitations. Time series of astrometric measurements with precisions better than one milli-arcsecond can yield new evidence on the planet occurrence around these objects."
"Over a time-span of two years, we obtained I-band images of the target fields with the FORS2 camera at the Very Large Telescope. Using background stars as references, we monitored the targets’ astrometric trajectories, which allowed us to measure parallax and proper motions, set limits on the presence of planets, and to discover the orbital motions of two binary systems."
"We determined trigonometric parallaxes with an average accuracy of 0.09 mas (≃0.2 %), which resulted in a reference sample for the study of ultracool dwarfs at the M/L transition, whose members are located at distances of 9.5–40 pc."
Astrometric "observations with an accuracy of 120 µas over two years are feasible from the ground and can be used for a planet-search survey."
The image at right is "of DE1520-44 in 0.47" seeing showing the primary (A), its companion (B), and the faint background object (x)."
"The distance [to Cygnus X-1 is] 1.86 +0.12 or -0.11 kpc [...] obtained from a trigonometric parallax measurement using the Very Long Baseline Array."
"Cygnus X-1 was the first [black hole] BH candidate to be established via dynamical observations [...] We observed Cygnus X-1 and two background [continuum] sources over 10 hr tracks at five epochs: 2009 January 23, April 13, July 13, and October 31, and 2010 January 25. These dates well sample the peaks of the sinusoidal trigonometric parallax signature in both right ascension and declination. This sampling provides near maximum sensitivity for parallax detection and ensures that we can separate the secular proper motion (caused by projections of Galactic rotation as well as any peculiar motion of Cygnus X-1 and the Sun) from the sinusoidal parallax effect."
"Generally, data calibration followed similar procedures as for parallax observations of continuum sources in the Orion nebular cluster at 8.4 GHz (Menten et al. 2007). We placed observations of well-known strong sources near the beginning, middle, and end of the observations in order to monitor delay and electronic phase differences among the intermediate frequency bands. In practice, however, we found minimal drifts and used only a single scan of J2005+7752 for this calibration."
The image at the right is a radio signal from Cygnus X-1. It "shows a core–jet structure [...] The peak brightness of Cygnus X-1 ranged between 4 and 9 mJy beam−1 among our observations. [... Including the background continuum sources] restoring beams are in the lower left corner of each panel. All contour levels are integer multiples of 1 mJy beam-1 for Cygnus X-1 and 15 mJy beam-1 for the background sources."
"The change in position of Cygnus X-1 relative to a background continuum source was then modeled by the parallax sinusoid in both coordinates, completely determined by one parameter (the parallax), and a secular proper motion in each coordinate [...] The model included the effects of the ellipticity of Earth's orbit. The weighting of the data in the parallax and proper motion fitting is complicated because the formal position uncertainties are often unrealistically small, since a priori unknown sources of systematic error often dominate over random noise. The north–south components of relative positions often have greater uncertainty than the east–west components because the interferometer beams are generally larger in the north–south direction and systematic errors from unmodeled atmospheric delays usually are more strongly correlated with north–south positions [...] In order to allow for, and estimate the magnitude of, systematic errors, we assigned independent "error floors" to the east–west and north–south position data and added these floors in quadrature with the formal position-fitting uncertainties. Trial parallax and proper motion fits were conducted and a reduced χ2ν (per degree of freedom) statistic was calculated separately for the east–west and north–south residuals. The error floors were then adjusted iteratively so as to make χ2ν ≈ 1.0 for each coordinate. This procedure resulted in error floors of 0.08 and 0.16 mas for the eastward and northward position measurements, respectively. The magnitudes of these error floors are reasonably consistent with those obtained for other parallax targets observed at 8.4 GHz, e.g., Menten et al. (2007), and are probably dominated by unmodeled ionospheric delays. Any component of variability in the centroid of the core–jet position of Cygnus X-1 caused by changing jet opacity must be less than ≈0.1 mas."
"The very high electron column densities toward PSRs B0823+26 (0.055 cm−3), B0833−45 (0.270 cm−3), B0950+08 (0.023 cm−3) and J1024−0719 (ne > 0.029 cm−3) in quadrant 3 indicate the presence of dense ionized gas immediately beyond the LHB."
"If the dense ionized material is outside the LHB, the relative deficiency of electrons along the LOS to PSR J0437−4715 may be accounted for if the gas is clumped or has a non-uniform z-distribution. There is independent evidence for the existence of ionized clouds in this region."
"HII with ne = 0.07-0.14 cm−3 fills 40-90 pc of the β CMa LOS. [...] two clouds dominate this LOS."
"During the twentieth century, Eros data were used to determine various values for the ratio of the sun’s mass to that of the Earth-moon system. Observations of Eros were also well suited for determining the value of the astronomical unit using either the trigonometric parallax method or the dynamical method. The solar parallax is defined as the angle subtended by the Earth’s radius as seen from a distance of 1 AU (about 8.8 arc seconds). As an object like Eros closely approaches the Earth, its apparent position on the plane-of-sky is very sensitive to the observer’s location on the Earth’s surface so that position observations can be used to solve for the value of solar parallax; given the radius of the Earth (in km), the AU is then determined. The dynamical method of determining the AU depends upon using the astrometric observations of a close Earth approaching object to determine the system mass of the Earth and moon. The mass of the Earth-moon system is directly related to the solar parallax value through a combination of Kepler’s third law and the equation of the acceleration of gravity at a given geocentric distance. An interesting history of the various trigonometric and dynamical attempts to determine the solar parallax is given by Eugene Rabe . However, with the use of radar to observe the planets, the value of the AU has been refined beyond the accuracy possible using observations of close Earth approaching asteroids."
"Diurnal parallax is a parallax that varies with rotation of the Earth or with difference of location on the Earth. The Moon and to a smaller extent the terrestrial planets or asteroids seen from different viewing positions on the Earth (at one given moment) can appear differently placed against the background of fixed stars."
Lunar parallax (often short for lunar horizontal parallax or lunar equatorial horizontal parallax) is a special case of (diurnal) parallax: the Moon, being the nearest celestial body, has by far the largest maximum parallax of any celestial body; it can exceed 1 degree.
The diagram (above) for stellar parallax can illustrate lunar parallax as well, if the diagram is taken to be scaled right down and slightly modified. Instead of 'near star', read 'Moon', and instead of taking the circle at the bottom of the diagram to represent the size of the Earth's orbit around the Sun, take it to be the size of the Earth's globe and of a circle around the Earth's surface. Then, the lunar (horizontal) parallax amounts to the difference in angular position, relative to the background of distant stars, of the Moon as seen from two different viewing positions on the Earth: one of the viewing positions is the place from which the Moon can be seen directly overhead at a given moment (that is, viewed along the vertical line in the diagram), and the other viewing position is a place from which the Moon can be seen on the horizon at the same moment (that is, viewed along one of the diagonal lines, from an Earth-surface position corresponding roughly to one of the blue dots on the modified diagram).
The lunar (horizontal) parallax can alternatively be defined as the angle subtended at the distance of the Moon by the radius of the Earth, equal to angle p in the diagram when scaled down and modified as mentioned above.
The lunar horizontal parallax at any time depends on the linear distance of the Moon from the Earth. The Earth-Moon linear distance varies continuously as the Moon follows its perturbed and approximately elliptical orbit around the Earth. The range of the variation in linear distance is from about 56 to 63.7 Earth radii, corresponding to a horizontal parallax of roughly a degree of arc, ranging from about 61.4' to about 54'. The Astronomical Almanac and similar publications tabulate the lunar horizontal parallax and/or the linear distance of the Moon from the Earth on a periodic (e.g. daily) basis for the convenience of astronomers (and formerly, of navigators), and the study of the way in which this coordinate varies with time forms part of lunar theory.
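The quoted figures follow from the simple geometric relation sin(p) = R_earth / d, with d expressed in Earth radii. A quick check in Python:

```python
import math

def lunar_horizontal_parallax_arcmin(distance_earth_radii):
    """Angle subtended by one Earth radius at the Moon's distance."""
    return math.degrees(math.asin(1.0 / distance_earth_radii)) * 60.0

print(round(lunar_horizontal_parallax_arcmin(56.0), 1))    # ~61.4' near perigee
print(round(lunar_horizontal_parallax_arcmin(63.7), 1))    # ~54.0' near apogee
```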
Parallax can also be used to determine the distance to the Moon.
One way to determine the lunar parallax from one location is by using a lunar eclipse. A full shadow of the Earth on the Moon has an apparent radius of curvature equal to the difference between the apparent radii of the Earth and the Sun as seen from the Moon. This radius can be seen to be equal to 0.75 degree, from which (with the solar apparent radius 0.25 degree) we get an Earth apparent radius of 1 degree. This yields for the Earth-Moon distance 60 Earth radii or 384,000 km. This procedure was first used by Aristarchus of Samos and Hipparchus, and later found its way into the work of Ptolemy. The diagram at right shows how daily lunar parallax arises on the geocentric and geostatic planetary model in which the Earth is at the centre of the planetary system and does not rotate. It also illustrates the important point that parallax need not be caused by any motion of the observer, contrary to some definitions of parallax that say it is, but may arise purely from motion of the observed.
Another method is to take two pictures of the Moon at exactly the same time from two locations on Earth and compare the positions of the Moon relative to the stars. Using the orientation of the Earth, those two position measurements, and the distance between the two locations on the Earth, the distance to the Moon can be triangulated:
The distance to the Moon is often initially calculated as a multiple of the Earth radius.
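The formula that originally followed the colon above has not survived extraction and is left elided. As a rough stand-in illustration of the triangulation: for two observers separated by a baseline roughly perpendicular to the direction of the Moon, the small-angle relation d ≈ baseline / (parallactic shift) applies. The numbers in this sketch are hypothetical.

```python
import math

def moon_distance_km(baseline_km, shift_arcsec):
    """Toy triangulation: two simultaneous observations from sites separated by a
    known baseline; the Moon's angular shift against the stars gives the distance."""
    p_rad = math.radians(shift_arcsec / 3600.0)
    return baseline_km / math.tan(p_rad)

# Hypothetical: a 6,000 km baseline and a measured shift of ~0.9 degrees
print(f"{moon_distance_km(6000.0, 0.9 * 3600):.3g} km")   # ~3.82e+05 km
```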
Hubble Space Telescope
"Using NASA's Hubble Space Telescope, astronomers now can precisely measure the distance of stars up to 10,000 light-years away -- 10 times farther than previously possible."
"Astronomers have developed yet another novel way to use the 24-year-old space telescope by employing a technique called spatial scanning, which dramatically improves Hubble's accuracy for making angular measurements. The technique, when applied to the age-old method for gauging distances called astronomical parallax, extends Hubble's tape measure 10 times farther into space."
"By applying a technique [illustrated in the image at the right] called spatial scanning to an age-old method for gauging distances called astronomical parallax, scientists now can use NASA’s Hubble Space Telescope to make precision distance measurements 10 times farther into our galaxy than previously possible."
"This new capability is expected to yield new insight into the nature of dark energy, a mysterious component of space that is pushing the universe apart at an ever-faster rate."
"Parallax, a trigonometric technique, is the most reliable method for making astronomical distance measurements, and a practice long employed by land surveyors here on Earth. The diameter of Earth's orbit is the base of a triangle and the star is the apex where the triangle's sides meet. The lengths of the sides are calculated by accurately measuring the three angles of the resulting triangle."
"Astronomical parallax works reliably well for stars within a few hundred light-years of Earth. For example, measurements of the distance to Alpha Centauri, the star system closest to our sun, vary only by one arc second. This variance in distance is equal to the apparent width of a dime seen from two miles away."
"Stars farther out have much smaller angles of apparent back-and-forth motion that are extremely difficult to measure. Astronomers have pushed to extend the parallax yardstick ever deeper into our galaxy by measuring smaller angles more accurately."
"This new long-range precision was proven when scientists successfully used Hubble to measure the distance of a special class of bright stars called Cepheid variables, approximately 7,500 light-years away in the northern constellation Auriga. The technique worked so well, they are now using Hubble to measure the distances of other far-flung Cepheids."
"Such measurements will be used to provide firmer footing for the so-called cosmic "distance ladder." This ladder's "bottom rung" is built on measurements to Cepheid variable stars that, because of their known brightness, have been used for more than a century to gauge the size of the observable universe. They are the first step in calibrating far more distant extra-galactic milepost markers such as Type Ia supernovae."
"To make a distance measurement, two exposures of the target Cepheid star were taken six months apart, when Earth was on opposite sides of the sun. A very subtle shift in the star's position was measured to an accuracy of 1/1,000 the width of a single image pixel in Hubble's Wide Field Camera 3, which has 16.8 megapixels total. A third exposure was taken after another six months to allow for the team to subtract the effects of the subtle space motion of stars, with additional exposures used to remove other sources of error."
Nano-JASMINE is the Nano-Japan Astrometry Satellite Mission for INfrared Exploration.
"Nano-JASMINE is a 50cm class micro satellite that has space astrometry mission for the first time in Japan. Making a map of many stars, Nano-JASMINE will take us a knowledge of our Galaxy, and techniques of observation. Intelligent Space Systems Laboratory (the University of Tokyo) that took two CubeSats and one Micro-Satellite into orbit covers bus system, and National Astronomical Observatory of Japan (NAOJ) that plans more precise missions by larger satellites covers mission telescope."
"Gaia [NSSDC/COSPAR ID: 2013-074A] is a European Space Agency astronomy mission whose primary goals are to: (1) measure the positions and velocity of approximately one billion stars; (2) determine the brightness, temperature, composition, and motion through space of those stars; and, (3) create a three-dimensional map of the Milky Way galaxy."
At left is an image of the Milky Way galaxy with the various targets and experimental regions for the ESA Gaia spacecraft depicted on it.
"Repeatedly scanning the sky, Gaia will observe each of the billion stars an average of 70 times each over the five years. It will measure the position and key physical properties of each star, including its brightness, temperature and chemical composition."
At lower right is an image from an animation that has the Gaia spacecraft spinning slowly (four revolutions per day) to sweep its two telescopes across the entire celestial sphere.
- In order for X-ray trigonometric parallax measurements to succeed we either need a much larger spatial width for observing X-ray source apparent movement or try to incorporate
- Zeilik & Gregory 1998, p. 44.
- Benedict, G. Fritz et al. (1999). "Interferometric Astrometry of Proxima Centauri and Barnard's Star Using HUBBLE SPACE TELESCOPE Fine Guidance Sensor 3: Detection Limits for Substellar Companions". The Astronomical Journal 118 (2): 1086–1100. doi:10.1086/300975.
- Mutual inclination of two lines meeting in an angle, In: Shorter Oxford English Dictionary. 1968.
- Parallax. Oxford English Dictionary (Second Edition). 1989.
- Christopher M. Linton (2004). From Eudoxus to Einstein—A History of Mathematical Astronomy. Cambridge: Cambridge University Press. ISBN 978-0-521-82750-8.
- John H. Bieging and William B. Latter (February 20, 1994). "A Millimeter-Wavelength Survey of S Stars for Mass Loss and Chemistry". The Astrophysical Journal 422 (2): 765-82. doi:10.1086/173769. http://adsabs.harvard.edu/full/1994ApJ...422..765B. Retrieved 2014-04-18.
- Haywood Smith, Jr and Heinrich Eichhorn (1996). "On the estimation of distances from trigonometric parallaxes". Monthly Notices of the Royal Astronomical Society 281 (1): 211-18. http://mnras.oxfordjournals.org/content/281/1/211.short. Retrieved 2014-04-18.
- M. J. Reid, K. M. Menten, X. W. Zheng, A. Brunthaler, and Y. Xu (November 10, 2009). "A trigonometric parallax of Sgr B2". The Astrophysical Journal 705 (2): 1548. doi:10.1088/0004-637X/705/2/1548. http://iopscience.iop.org/0004-637X/705/2/1548. Retrieved 2014-04-19.
- M. Toscano, M. C. Britton, R. N. Manchester, M. Bailes, J. S. Sandhu, S. R. Kulkarni, S. B. Anderson (October 1, 1999). "Parallax of PSR J1744–1134 and the local interstellar medium". The Astrophysical Journal 523 (2): L171. doi:10.1086/312276. http://iopscience.iop.org/1538-4357/523/2/L171. Retrieved 2014-04-19.
- Mark J. Reid, Jeffrey E. McClintock, Ramesh Narayan, Lijun Gou, Ronald A. Remillard, and Jerome A. Orosz (December 1, 2011). "The trigonometric parallax of Cygnus X-1". The Astrophysical Journal 742 (2): 83-95. doi:10.1088/0004-637X/742/2/83. http://iopscience.iop.org/0004-637X/742/2/83. Retrieved 2014-04-18.
- M. Sato, M. J. Reid, A. Brunthaler, and K. M. Menten (September 10, 2010). "Trigonometric parallax of W51 main/south". The Astrophysical Journal 720 (2): 1055. doi:10.1088/0004-637X/720/2/1055. http://iopscience.iop.org/0004-637X/720/2/1055. Retrieved 2014-04-19.
- Toshikazu Ebisuzaki, H. Mase, Y. Takizawa, Y. Kawasaki, K. Shinozaki, F. Kajino, N. Inoue, N. Sakaki, A. Santangelo, M. Teshima, E. Parizot, P. Gorodetzky, O. Catalano, P. Picozza, M. Casolino, M. Panasyuk, B.A. Khrenov, I.H. Park, T. Peter, G. Medina-Tanco, D. Rodriguez-Frias, J. Szabelski, and P. Bobik (2011). The JEM-EUSO Mission, In: XVI International Symposium on Very High Energy Cosmic Ray Interactions (28 June – 2 July 2010). Batavia, Illinois USA: ISVHECRI 2010. p. 4. http://arxiv.org/pdf/1101.1909.pdf. Retrieved 2014-04-20.
- T. Kobayashi, Y. Komori, K. Yoshida, and J. Nishimura (2004). "The Most Likely Sources of High Energy Cosmic-Ray Electrons in Supernova Remnants". The Astrophysical Journal 601 (1): 340. http://iopscience.iop.org/0004-637X/601/1/340. Retrieved 2014-04-20.
- Wako Aoki, Sean G. Ryan, John E. Norris, Timothy C. Beers, Hiroyasu Ando, Nobuyuki Iwamoto, Toshitaka Kajino, Grant J. Mathews, Masayuki Y. Fujimoto (2001). "Neutron Capture Elements in s-Process-Rich, Very Metal-Poor Stars". The Astrophysical Journal 561 (1): 346. http://iopscience.iop.org/0004-637X/561/1/346. Retrieved 2014-04-20.
- P.-E. Tremblay and P. Bergeron (2009). "Spectroscopic Analysis of DA White Dwarfs: Stark Broadening of Hydrogen Lines including Non-ideal Effects". The Astrophysical Journal 696 (2): 1755. http://iopscience.iop.org/0004-637X/696/2/1755. Retrieved 2014-04-20.
- R. Stepanov, P. Frick, A. Shukurov and D. Sokoloff (August 2002). "Wavelet tomography of the Galactic magnetic field I. The method". Astronomy & Astrophysics 391 (08): 361-8. doi:10.1051/0004-6361:20020552. http://adsabs.harvard.edu/abs/2002A%26A...391..361S. Retrieved 2014-04-20.
- Lance M. Simms (June 29, 2011). "Autonomous subpixel satellite track end point determination for space-based images". Applied Optics 50 (22): D1-6. http://www.opticsinfobase.org/abstract.cfm?uri=ao-50-22-D1. Retrieved 2014-04-20.
- M. Hünsch and K.-P. Schröder (May 1996). "The revised X-ray dividing line: new light on late stellar activity". Astronomy & Astrophysics 309 (05): L51-4. http://articles.adsabs.harvard.edu/full/1996A%26A...309L..51H. Retrieved 2014-04-20.
- W. Wang (July 2011). "Possible distance indicators in gamma-ray pulsars". Research in Astronomy and Astrophysics 11 (7): 824-. doi:10.1088/1674-4527/11/7/007. http://iopscience.iop.org/1674-4527/11/7/007. Retrieved 2014-04-20.
- Heasarc (April 18, 2014). Einstein (HEAO-2). Greenbelt, Maryland USA: NASA GSFC. Retrieved 2014-04-18.
- Robert Petre (May 14, 2004). ROSAT : The Roentgen Satellite. Greenbelt, Maryland USA: NASA GSFC. Retrieved 2014-04-18.
- B. Wargelin and J. Drake (December 23, 2009). Proxima Centauri: The Nearest Star to the Sun. 60 Garden Street, Cambridge, MA 02138 USA: Harvard-Smithsonian Center for Astrophysics. Retrieved 2014-04-18.
- John E. Gizis, Wei-Chun Jao, John P. Subasavage, and Todd J. Henry (November 1, 2007). "The trigonometric parallax of the brown dwarf planetary system 2MASSW J1207334–393254". The Astrophysical Journal 669 (1): L45. doi:10.1086/523271. http://iopscience.iop.org/1538-4357/669/1/L45. Retrieved 2014-04-20.
- Conard C. Dahn, P. Bergeron, James Liebert, Hugh C. Harris, Blaise Canzian, S. K. Leggett, and S. Boudreault (April 10, 2004). "Analysis of a Very Massive DA White Dwarf via the Trigonometric Parallax and Spectroscopic Methods". The Astrophysical Journal 605 (1): 400. doi:10.1086/382208. http://iopscience.iop.org/0004-637X/605/1/400. Retrieved 2014-04-20.
- G. Bono, F. Caputo, V. Castellani, M. Marconi, J. Storm, and S. Degl'Innocenti (2003). "A pulsational approach to near-infrared and visual magnitudes of RR Lyr stars". Monthly Notices of the Royal Astronomical Society 344 (4): 1097-106. doi:10.1046/j.1365-8711.2003.06878.x. http://mnras.oxfordjournals.org/content/344/4/1097.full. Retrieved 2014-04-20.
- A. Alksnis, A. Balklavs, U. Dzervitis, and I. Eglitis (1998). "Absolute magnitudes of carbon stars from Hipparcos parallaxes". Astronomy & Astrophysics 338: 209-16. http://articles.adsabs.harvard.edu/cgi-bin/nph-iarticle_query?1998A%26A...338..209A&data_type=PDF_HIGH&whole_paper=YES&type=PRINTER&filetype=.pdf. Retrieved 2014-04-20.
- R.-D. Scholz, G. P. Szokoly and M. Andersen, R. Ibata, and M. J. Irwin (2002). "A New Wide Pair of Cool White Dwarfs in the Solar Neighborhood". The Astrophysical Journal 565 (1): 539. http://iopscience.iop.org/0004-637X/565/1/539. Retrieved 2014-04-20.
- J. Sahlmann, P. F. Lazorenko, D. Ségransan, E. L. Martín, M. Mayor, D. Queloz, and S. Udry (in press 2014). "Astrometric planet search around southern ultracool dwarfs I. First results, including parallaxes of 20 M8–L2 dwarfs". Astronomy & Astrophysics: 19. http://arxiv.org/abs/1403.1275. Retrieved 2014-04-17.
- Donald K. Yeomans (1995). "Asteroid 433 Eros: The Target Body of the NEAR Mission". Journal of the Astronautical Sciences (Pasadena, California USA: NASA JPL). http://trs-new.jpl.nasa.gov/dspace/bitstream/2014/31429/1/95-1108.pdf. Retrieved 2014-04-18.
- P. Kenneth Seidelmann (2005). Explanatory Supplement to the Astronomical Almanac. University Science Books. pp. 123–125. ISBN 1891389459.
- Cesare Barbieri (2007). Fundamentals of astronomy. CRC Press. pp. 132–135. ISBN 0750308869.
- Astronomical Almanac e.g. for 1981, section D
- Astronomical Almanac, e.g. for 1981: see Glossary; for formulae see Explanatory Supplement to the Astronomical Almanac, 1992, p.400
- Gutzwiller, Martin C. (1998). "Moon-Earth-Sun: The oldest three-body problem". Reviews of Modern Physics 70 (2): 589. doi:10.1103/RevModPhys.70.589.
- J.D. Harrington and Ray Villard (April 10, 2014). NASA's Hubble Extends Stellar Tape Measure 10 Times Farther Into Space. Washington, DC USA: NASA Headquarters. Retrieved 2014-04-16.
- Adam Riess (April 10, 2014). NASA's Hubble Extends Stellar Tape Measure 10 Times Farther Into Space. Washington, DC USA: NASA Headquarters. Retrieved 2014-04-16.
- Intelligent Space Systems (May 2008). Mapping the Galaxy Nano-JASMINE. Tokyo, Japan: The University of Tokyo. Retrieved 2014-04-16.
- Ed Grayzeck (August 16, 2013). Gaia. Washington, DC USA: National Space Science Data Center, NASA. Retrieved 2014-01-07.
- C. Carreau (December 19, 2013). ESA PR 44-2013: Liftoff for ESA's Billion-Star Surveyor. European Space Agency. Retrieved 2014-01-07.
- Gaia Focal Plane. ESA Science and Technology.
|
By the end of this section, you will be able to:
- Explain the functions of the spinal cord
- Identify the hemispheres and lobes of the brain
- Describe the types of techniques available to clinicians and researchers to image or scan the brain
The brain is a remarkably complex organ comprised of billions of interconnected neurons and glia. It is a bilateral, or two-sided, structure that can be separated into distinct lobes. Each lobe is associated with certain types of functions, but, ultimately, all of the areas of the brain interact with one another to provide the foundation for our thoughts and behaviors. In this section, we discuss the overall organization of the brain and the functions associated with different brain areas, beginning with what can be seen as an extension of the brain, the spinal cord.
The Spinal Cord
It can be said that the spinal cord is what connects the brain to the outside world. Because of it, the brain can act. The spinal cord is like a relay station, but a very smart one. It not only routes messages to and from the brain, but it also has its own system of automatic processes, called reflexes.
The top of the spinal cord merges with the brain stem, where the basic processes of life are controlled, such as breathing and digestion. In the opposite direction, the spinal cord ends just below the ribs—contrary to what we might expect, it does not extend all the way to the base of the spine.
The spinal cord is functionally organized in 30 segments, corresponding with the vertebrae. Each segment is connected to a specific part of the body through the peripheral nervous system. Nerves branch out from the spine at each vertebra. Sensory nerves bring messages in; motor nerves send messages out to the muscles and organs. Messages travel to and from the brain through every segment.
Some sensory messages are immediately acted on by the spinal cord, without any input from the brain. Withdrawal from heat and knee jerk are two examples. When a sensory message meets certain parameters, the spinal cord initiates an automatic reflex. The signal passes from the sensory nerve to a simple processing center, which initiates a motor command. Seconds are saved, because messages don’t have to go to the brain, be processed, and get sent back. In matters of survival, the spinal reflexes allow the body to react extraordinarily fast.
The spinal cord is protected by bony vertebrae and cushioned in cerebrospinal fluid, but injuries still occur. When the spinal cord is damaged in a particular segment, all lower segments are cut off from the brain, causing paralysis. Therefore, the lower on the spine damage is, the fewer functions an injured individual loses.
The Two Hemispheres
The surface of the brain, known as the cerebral cortex, is very uneven, characterized by a distinctive pattern of folds or bumps, known as gyri (singular: gyrus), and grooves, known as sulci (singular: sulcus), shown in [link]. These gyri and sulci form important landmarks that allow us to separate the brain into functional centers. The most prominent sulcus, known as the longitudinal fissure, is the deep groove that separates the brain into two halves or hemispheres: the left hemisphere and the right hemisphere.
There is evidence of some specialization of function—referred to as lateralization—in each hemisphere, mainly regarding differences in language ability. Beyond that, however, the differences that have been found have been minor. What we do know is that the left hemisphere controls the right half of the body, and the right hemisphere controls the left half of the body.
The two hemispheres are connected by a thick band of neural fibers known as the corpus callosum, consisting of about 200 million axons. The corpus callosum allows the two hemispheres to communicate with each other and allows for information being processed on one side of the brain to be shared with the other side.
Normally, we are not aware of the different roles that our two hemispheres play in day-to-day functions, but there are people who come to know the capabilities and functions of their two hemispheres quite well. In some cases of severe epilepsy, doctors elect to sever the corpus callosum as a means of controlling the spread of seizures ([link]). While this is an effective treatment option, it results in individuals who have split brains. After surgery, these split-brain patients show a variety of interesting behaviors. For instance, a split-brain patient is unable to name a picture that is shown in the patient’s left visual field because the information is only available in the largely nonverbal right hemisphere. However, they are able to recreate the picture with their left hand, which is also controlled by the right hemisphere. When the more verbal left hemisphere sees the picture that the hand drew, the patient is able to name it (assuming the left hemisphere can interpret what was drawn by the left hand).
Much of what we know about the functions of different areas of the brain comes from studying changes in the behavior and ability of individuals who have suffered damage to the brain. For example, researchers study the behavioral changes caused by strokes to learn about the functions of specific brain areas. A stroke, caused by an interruption of blood flow to a region in the brain, causes a loss of brain function in the affected region. The damage can be in a small area, and, if it is, this gives researchers the opportunity to link any resulting behavioral changes to a specific area. The types of deficits displayed after a stroke will be largely dependent on where in the brain the damage occurred.
Consider Theona, an intelligent, self-sufficient woman, who is 62 years old. Recently, she suffered a stroke in the front portion of her right hemisphere. As a result, she has great difficulty moving her left leg. (As you learned earlier, the right hemisphere controls the left side of the body; also, the brain’s main motor centers are located at the front of the head, in the frontal lobe.) Theona has also experienced behavioral changes. For example, while in the produce section of the grocery store, she sometimes eats grapes, strawberries, and apples directly from their bins before paying for them. This behavior—which would have been very embarrassing to her before the stroke—is consistent with damage in another region in the frontal lobe—the prefrontal cortex, which is associated with judgment, reasoning, and impulse control.
The two hemispheres of the cerebral cortex are part of the forebrain ([link]), which is the largest part of the brain. The forebrain contains the cerebral cortex and a number of other structures that lie beneath the cortex (called subcortical structures): thalamus, hypothalamus, pituitary gland, and the limbic system (collection of structures). The cerebral cortex, which is the outer surface of the brain, is associated with higher level processes such as consciousness, thought, emotion, reasoning, language, and memory. Each cerebral hemisphere can be subdivided into four lobes, each associated with different functions.
Lobes of the Brain
The four lobes of the brain are the frontal, parietal, temporal, and occipital lobes ([link]). The frontal lobe is located in the forward part of the brain, extending back to a fissure known as the central sulcus. The frontal lobe is involved in reasoning, motor control, emotion, and language. It contains the motor cortex, which is involved in planning and coordinating movement; the prefrontal cortex, which is responsible for higher-level cognitive functioning; and Broca’s area, which is essential for language production.
People who suffer damage to Broca’s area have great difficulty producing language of any form ([link]). For example, Padma was an electrical engineer who was socially active and a caring, involved mother. About twenty years ago, she was in a car accident and suffered damage to her Broca’s area. She completely lost the ability to speak and form any kind of meaningful language. There is nothing wrong with her mouth or her vocal cords, but she is unable to produce words. She can follow directions but can’t respond verbally, and she can read but no longer write. She can do routine tasks like running to the market to buy milk, but she could not communicate verbally if a situation called for it.
Probably the most famous case of frontal lobe damage is that of a man by the name of Phineas Gage. On September 13, 1848, Gage (age 25) was working as a railroad foreman in Vermont. He and his crew were using an iron rod to tamp explosives down into a blasting hole to remove rock along the railway’s path. Unfortunately, the iron rod created a spark and caused the rod to explode out of the blasting hole, into Gage’s face, and through his skull ([link]). Although lying in a pool of his own blood with brain matter emerging from his head, Gage was conscious and able to get up, walk, and speak. But in the months following his accident, people noticed that his personality had changed. Many of his friends described him as no longer being himself. Before the accident, it was said that Gage was a well-mannered, soft-spoken man, but he began to behave in odd and inappropriate ways after the accident. Such changes in personality would be consistent with loss of impulse control—a frontal lobe function.
Beyond the damage to the frontal lobe itself, subsequent investigations into the rod’s path also identified probable damage to pathways between the frontal lobe and other brain structures, including the limbic system. With connections between the planning functions of the frontal lobe and the emotional processes of the limbic system severed, Gage had difficulty controlling his emotional impulses.
However, there is some evidence suggesting that the dramatic changes in Gage’s personality were exaggerated and embellished. Gage’s case occurred in the midst of a 19th century debate over localization—regarding whether certain areas of the brain are associated with particular functions. On the basis of extremely limited information about Gage, the extent of his injury, and his life before and after the accident, scientists tended to find support for their own views, on whichever side of the debate they fell (Macmillan, 1999).
The brain’s parietal lobe is located immediately behind the frontal lobe, and is involved in processing information from the body’s senses. It contains the somatosensory cortex, which is essential for processing sensory information from across the body, such as touch, temperature, and pain. The somatosensory cortex is organized topographically, which means that spatial relationships that exist in the body are maintained on the surface of the somatosensory cortex ([link]). For example, the portion of the cortex that processes sensory information from the hand is adjacent to the portion that processes information from the wrist.
The temporal lobe is located on the side of the head (temporal means “near the temples”), and is associated with hearing, memory, emotion, and some aspects of language. The auditory cortex, the main area responsible for processing auditory information, is located within the temporal lobe. Wernicke’s area, important for speech comprehension, is also located here. Whereas individuals with damage to Broca’s area have difficulty producing language, those with damage to Wernicke’s area can produce sensible language, but they are unable to understand it ([link]).
The occipital lobe is located at the very back of the brain, and contains the primary visual cortex, which is responsible for interpreting incoming visual information. The occipital cortex is organized retinotopically, which means there is a close relationship between the position of an object in a person’s visual field and the position of that object’s representation on the cortex. You will learn much more about how visual information is processed in the occipital lobe when you study sensation and perception.
Other Areas of the Forebrain
Other areas of the forebrain, located beneath the cerebral cortex, include the thalamus and the limbic system. The thalamus is a sensory relay for the brain. All of our senses, with the exception of smell, are routed through the thalamus before being directed to other areas of the brain for processing ([link]).
The limbic system is involved in processing both emotion and memory. Interestingly, the sense of smell projects directly to the limbic system; therefore, not surprisingly, smell can evoke emotional responses in ways that other sensory modalities cannot. The limbic system is made up of a number of different structures, but three of the most important are the hippocampus, the amygdala, and the hypothalamus ([link]). The hippocampus is an essential structure for learning and memory. The amygdala is involved in our experience of emotion and in tying emotional meaning to our memories. The hypothalamus regulates a number of homeostatic processes, including the regulation of body temperature, appetite, and blood pressure. The hypothalamus also serves as an interface between the nervous system and the endocrine system and in the regulation of sexual motivation and behavior.
The Case of Henry Molaison (H.M.)
In 1953, Henry Gustav Molaison (H. M.) was a 27-year-old man who experienced severe seizures. In an attempt to control his seizures, H. M. underwent brain surgery to remove his hippocampus and amygdala. Following the surgery, H.M’s seizures became much less severe, but he also suffered some unexpected—and devastating—consequences of the surgery: he lost his ability to form many types of new memories. For example, he was unable to learn new facts, such as who was president of the United States. He was able to learn new skills, but afterward he had no recollection of learning them. For example, while he might learn to use a computer, he would have no conscious memory of ever having used one. He could not remember new faces, and he was unable to remember events, even immediately after they occurred. Researchers were fascinated by his experience, and he is considered one of the most studied cases in medical and psychological history (Hardt, Einarsson, & Nader, 2010; Squire, 2009). Indeed, his case has provided tremendous insight into the role that the hippocampus plays in the consolidation of new learning into explicit memory.
Midbrain and Hindbrain Structures
The midbrain is comprised of structures located deep within the brain, between the forebrain and the hindbrain. The reticular formation is centered in the midbrain, but it actually extends up into the forebrain and down into the hindbrain. The reticular formation is important in regulating the sleep/wake cycle, arousal, alertness, and motor activity.
The substantia nigra (Latin for “black substance”) and the ventral tegmental area (VTA) are also located in the midbrain ([link]). Both regions contain cell bodies that produce the neurotransmitter dopamine, and both are critical for movement. Degeneration of the substantia nigra and VTA is involved in Parkinson’s disease. In addition, these structures are involved in mood, reward, and addiction (Berridge & Robinson, 1998; Gardner, 2011; George, Le Moal, & Koob, 2012).
The hindbrain is located at the back of the head and looks like an extension of the spinal cord. It contains the medulla, pons, and cerebellum ([link]). The medulla controls the automatic processes of the autonomic nervous system, such as breathing, blood pressure, and heart rate. The word pons literally means “bridge,” and as the name suggests, the pons serves to connect the brain and spinal cord. It also is involved in regulating brain activity during sleep. The medulla, pons, and midbrain together are known as the brainstem.
The cerebellum (Latin for “little brain”) receives messages from muscles, tendons, joints, and structures in our ear to control balance, coordination, movement, and motor skills. The cerebellum is also thought to be an important area for processing some types of memories. In particular, procedural memory, or memory involved in learning and remembering how to perform tasks, is thought to be associated with the cerebellum. Recall that H. M. was unable to form new explicit memories, but he could learn new tasks. This is likely due to the fact that H. M.’s cerebellum remained intact.
What Do You Think?: Brain Dead and on Life Support
What would you do if your spouse or loved one was declared brain dead but his or her body was being kept alive by medical equipment? Whose decision should it be to remove a feeding tube? Should medical care costs be a factor?
On February 25, 1990, a Florida woman named Terri Schiavo went into cardiac arrest, apparently triggered by a bulimic episode. She was eventually revived, but her brain had been deprived of oxygen for a long time. Brain scans indicated that there was no activity in her cerebral cortex, and she suffered from severe and permanent cerebral atrophy. Basically, Schiavo was in a vegetative state. Medical professionals determined that she would never again be able to move, talk, or respond in any way. To remain alive, she required a feeding tube, and there was no chance that her situation would ever improve.
On occasion, Schiavo’s eyes would move, and sometimes she would groan. Despite the doctors’ insistence to the contrary, her parents believed that these were signs that she was trying to communicate with them.
After 12 years, Schiavo’s husband argued that his wife would not have wanted to be kept alive with no feelings, sensations, or brain activity. Her parents, however, were very much against removing her feeding tube. Eventually, the case made its way to the courts, both in the state of Florida and at the federal level. By 2005, the courts found in favor of Schiavo’s husband, and the feeding tube was removed on March 18, 2005. Schiavo died 13 days later.
Why did Schiavo’s eyes sometimes move, and why did she groan? Although the parts of her brain that control thought, voluntary movement, and feeling were completely damaged, her brainstem was still intact. Her medulla and pons maintained her breathing and caused involuntary movements of her eyes and the occasional groans. Over the 15-year period that she was on a feeding tube, Schiavo’s medical costs may have topped $7 million (Arnst, 2003).
These questions were brought to popular consciousness 25 years ago in the case of Terri Schiavo, and they persist today. In 2013, a 13-year-old girl who suffered complications after tonsil surgery was declared brain dead. There was a battle between her family, who wanted her to remain on life support, and the hospital’s policies regarding persons declared brain dead. In another complicated 2013–14 case in Texas, a pregnant EMT professional declared brain dead was kept alive for weeks, despite her spouse’s directives, which were based on her wishes should this situation arise. In this case, state laws designed to protect an unborn fetus came into consideration until doctors determined the fetus unviable.
Decisions surrounding the medical response to patients declared brain dead are complex. What do you think about these issues?
You have learned how brain injury can provide information about the functions of different parts of the brain. Increasingly, however, we are able to obtain that information using brain imaging techniques on individuals who have not suffered brain injury. In this section, we take a more in-depth look at some of the techniques that are available for imaging the brain, including techniques that rely on radiation, magnetic fields, or electrical activity within the brain.
Techniques Involving Radiation
A computerized tomography (CT) scan involves taking a number of x-rays of a particular section of a person’s body or brain ([link]). The x-rays pass through tissues of different densities at different rates, allowing a computer to construct an overall image of the area of the body being scanned. A CT scan is often used to determine whether someone has a tumor, or significant brain atrophy.
Positron emission tomography (PET) scans create pictures of the living, active brain ([link]). An individual receiving a PET scan drinks or is injected with a mildly radioactive substance, called a tracer. Once in the bloodstream, the amount of tracer in any given region of the brain can be monitored. As brain areas become more active, more blood flows to that area. A computer monitors the movement of the tracer and creates a rough map of active and inactive areas of the brain during a given behavior. PET scans show little detail, are unable to pinpoint events precisely in time, and require that the brain be exposed to radiation; therefore, this technique has been replaced by the fMRI as an alternative diagnostic tool. However, combined with CT, PET technology is still being used in certain contexts. For example, CT/PET scans allow better imaging of the activity of neurotransmitter receptors and open new avenues in schizophrenia research. In this hybrid CT/PET technology, CT contributes clear images of brain structures, while PET shows the brain’s activity.
Techniques Involving Magnetic Fields
In magnetic resonance imaging (MRI), a person is placed inside a machine that generates a strong magnetic field. The magnetic field causes the hydrogen atoms in the body’s cells to move. When the magnetic field is turned off, the hydrogen atoms emit electromagnetic signals as they return to their original positions. Tissues of different densities give off different signals, which a computer interprets and displays on a monitor. Functional magnetic resonance imaging (fMRI) operates on the same principles, but it shows changes in brain activity over time by tracking blood flow and oxygen levels. The fMRI provides more detailed images of the brain’s structure, as well as better accuracy in time, than is possible in PET scans ([link]). With their high level of detail, MRI and fMRI are often used to compare the brains of healthy individuals to the brains of individuals diagnosed with psychological disorders. This comparison helps determine what structural and functional differences exist between these populations.
Techniques Involving Electrical Activity
In some situations, it is helpful to gain an understanding of the overall activity of a person’s brain, without needing information on the actual location of the activity. Electroencephalography (EEG) serves this purpose by providing a measure of a brain’s electrical activity. An array of electrodes is placed around a person’s head ([link]). The signals received by the electrodes result in a printout of the electrical activity of his or her brain, or brainwaves, showing both the frequency (number of waves per second) and amplitude (height) of the recorded brainwaves, with an accuracy within milliseconds. Such information is especially helpful to researchers studying sleep patterns among individuals with sleep disorders.
The brain consists of two hemispheres, each controlling the opposite side of the body. Each hemisphere can be subdivided into different lobes: frontal, parietal, temporal, and occipital. In addition to the lobes of the cerebral cortex, the forebrain includes the thalamus (sensory relay) and limbic system (emotion and memory circuit). The midbrain contains the reticular formation, which is important for sleep and arousal, as well as the substantia nigra and ventral tegmental area. These structures are important for movement, reward, and addictive processes. The hindbrain contains the medulla and pons, which, together with the midbrain, make up the brainstem and control automatic functions like breathing and blood pressure. The hindbrain also contains the cerebellum, which helps coordinate movement and certain types of memories.
Individuals with brain damage have been studied extensively to provide information about the role of different areas of the brain, and recent advances in technology allow us to glean similar information by imaging brain structure and function. These techniques include CT, PET, MRI, fMRI, and EEG.
Self Check Questions
Critical Thinking Questions
1. Before the advent of modern imaging techniques, scientists and clinicians relied on autopsies of people who suffered brain injury with resultant change in behavior to determine how different areas of the brain were affected. What are some of the limitations associated with this kind of approach?
2. Which of the techniques discussed would be viable options for you to determine how activity in the reticular formation is related to sleep and wakefulness? Why?
Personal Application Questions
3. You read about H. M.’s memory deficits following the bilateral removal of his hippocampus and amygdala. Have you encountered a character in a book, television program, or movie that suffered memory deficits? How was that character similar to and different from H. M.?
1. The same limitations associated with any case study would apply here. In addition, it is possible that the damage caused changes in other areas of the brain, which might contribute to the behavioral deficits. Such changes would not necessarily be obvious to someone performing an autopsy, as they may be functional in nature, rather than structural.
2. The most viable techniques are fMRI and PET because of their ability to provide information about brain activity and structure simultaneously.
What is a Randomized Controlled Trial (RCT)?
A randomized controlled trial (RCT) is a prospective experimental design that randomly assigns participants to an experimental or control group. RCTs are the gold standard for establishing causal relationships and ruling out confounding variables and selection bias. Researchers must be able to control who receives the treatments and who are the controls to use this design.
In this design, random assignment tends to equally distribute all subject characteristics that affect the outcome. In short, randomization balances the treatment and control groups at the beginning of a randomized controlled trial. The only difference between groups is the treatment condition itself. Consequently, the intervention likely caused any group differences researchers find when the RCT concludes.
Random assignment is crucial for ruling out other potentially explanatory factors that could have caused those outcome differences. This process in RCTs is so effective that it even works with potential confounders that the researchers don’t know about! Think age, lifestyle, or genetics. Learn more about Random Assignment in Experiments.
Scientists use randomized controlled trials most frequently in fields like medicine, psychology, and social sciences to rigorously test interventions and treatments.
In this post, learn how RCTs work, the various types, and their strengths and weaknesses.
Randomized Controlled Trial Example
Imagine testing a new drug against a placebo using a randomized controlled trial. We take a representative sample of 100 patients. 50 get the drug; 50 get the placebo. Who gets what? It’s random! Perhaps we flip a coin. For more complex designs, we’d probably use computers for random assignment.
After a month, we measure health outcomes. Did the drug help more than the placebo? That’s what we find out!
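To make that concrete, here's a minimal Python sketch of the assignment and comparison steps (my illustration, not part of any real trial protocol: the patient IDs, the simulated outcome scores, and the fixed 50/50 split are all assumptions).

```python
import random
from statistics import mean

random.seed(1)  # fixed seed so this toy example is reproducible

# Hypothetical pool of 100 patient IDs
patients = list(range(1, 101))
random.shuffle(patients)                      # the randomization step (our "coin flip")
drug_group, placebo_group = patients[:50], patients[50:]

# Simulated health scores measured after a month (made-up numbers, not real data)
drug_ids = set(drug_group)
score = {pid: random.gauss(60, 10) + (5 if pid in drug_ids else 0) for pid in patients}

print(f"Mean outcome, drug group:    {mean(score[p] for p in drug_group):.1f}")
print(f"Mean outcome, placebo group: {mean(score[p] for p in placebo_group):.1f}")
```

Real trials would use a pre-registered randomization procedure and a formal statistical test rather than a simple comparison of means, but the shuffle above is the whole idea of random assignment in miniature.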
To read about several examples of top-notch RCTs in more detail, read my following posts:
Common Elements for Effective RCT Designs
While randomization springs to mind when discussing RCTs, other equally vital components shape these robust experimental designs. Most well-designed randomized controlled trials contain the following elements.
- Control Group: Almost every RCT features a control group. This group might receive a placebo, no intervention, or standard care. You can estimate the treatment’s effect size by comparing the outcome in a treatment group to the control group. Learn more about Control Groups in an Experiment and controlling for the Placebo Effect.
- Blinding: Blinding hides group assignments from researchers and participants to prevent group assignment knowledge from influencing results. More on this shortly!
- Pre-defined Inclusion and Exclusion Criteria: These criteria set the boundaries for who can participate based on specifics like age or health conditions.
- Baseline Assessment: Before diving in, an initial assessment records participants’ starting conditions.
- Outcome Measures: Clear, pre-defined outcomes, like symptom reduction or survival rates, drive the study’s goals.
- Controlled, Standardized Environments: Ensuring variables are measured and treatments administered consistently minimizes external factors that could affect results.
- Monitoring and Data Collection: Regular checks guarantee participant safety and uniform data gathering.
- Ethical Oversight: Ensures participants’ rights and well-being are prioritized.
- Informed Consent: Participants must know the drill and agree to participate before joining.
- Statistical Plan: Detailing how statisticians will analyze the data before the RCT begins helps keep the evaluation objective and prevents p-hacking. Learn more about P-Hacking Best Practices.
- Protocol Adherence: Consistency is critical. Following the plan ensures reliable results.
- Analysis and Reporting: Once done, researchers share the results—good, bad, or neutral. Transparency builds trust.
These components ensure randomized controlled trials are both rigorous and ethically sound, leading to trustworthy results.
Common Variations of Randomized Controlled Trial Designs
Randomized controlled trial designs aren’t one-size-fits-all. Depending on the research question and context, researchers can apply various configurations.
Let’s explore the most common RCT designs:
- Parallel Group: Participants are randomly put into an intervention or control group.
- Crossover: Participants randomly receive both intervention and control at different times.
- Factorial: Tests multiple interventions at once. Useful for combination therapies.
- Cluster: Groups, not individuals, are randomized. For instance, researchers can randomly assign schools or towns to the experimental groups.
Learn more about Experimental Design: Definition and Types.
Blinding in RCTs
Blinding is a standard protection in randomized controlled trials. The term refers to procedures that hide group assignments from those involved. While randomization ensures initial group balance, it doesn’t prevent uneven treatment or assessment as the RCT progresses, which could skew results.
So, what is the best way to sidestep potential biases?
Keep as many people in the dark about group assignments as possible. In a blinded randomized controlled trial, participants, and sometimes researchers, don’t know who gets the intervention.
There are three types of blinding:
- Single: Participants don’t know if they’re in the intervention or control group.
- Double: Both participants and researchers are in the dark.
- Triple: Participants, researchers, and statisticians all don’t know.
Blinding guards against sneaky biases that might creep into our RCT results. Let’s look at a few:
- Confirmation Bias: Without blinding in a randomized controlled trial, researchers might unconsciously favor results that align with their expectations. For example, they might interpret ambiguous data as positive effects of a new drug if they’re hopeful about its efficacy.
- Placebo Effect: Participants who know they’re getting the ‘real deal’ might report improved outcomes simply because they believe in the treatment’s power. Conversely, those aware they’re in the control group might not notice genuine improvements.
- Observer Bias: If a researcher knows which participant is in which group, they might inadvertently influence outcomes. Imagine a physiotherapist unknowingly encouraging a participant more because they know they’re receiving the new treatment.
Blinding helps keep these biases at bay, making our results more reliable. It boosts confidence in a randomized controlled trial. Let’s close by summarizing the benefits and disadvantages of an RCT.
The Benefits of Randomized Controlled Studies
Randomized controlled trials offer a unique blend of strengths:
- RCTs are best for identifying causal relationships.
- Random assignment reduces both known and unknown biases.
- Many RCT designs exist, tailored for different research questions.
- Well-defined steps and controlled conditions ensure replicability across studies.
- Internal validity tends to be high in a randomized controlled trial. You can be confident that other variables don’t affect or account for the observed relationship.
The Drawbacks of RCTs
While powerful, RCTs also come with limitations:
- Randomized controlled trials can be expensive in time, money, and resources.
- Ethical concerns can arise when withholding treatments from a control group.
- Random assignment might not be possible in some circumstances.
- External validity can be low in an RCT. Conditions can be so controlled that the results might not always generalize beyond the study.
Learn more about Internal and External Validity in Experiments and see how they’re a tradeoff. |
Most people will probably remember the times tables from primary school quizzes. There might be patterns in some of them (the simple doubling of the 2 times table) but others you just learnt by rote. And it was never quite clear just why it was necessary to know what 7 x 9 is off the top of your head.
Well, have no fear, there will be no number quizzes here.
Instead, I want to show you a way to build numbers that gives them some structure, and how multiplication uses that structure.
Multiplication simply gives you the area of a rectangle, if you know the lengths of the sides. Pick any square in the grid (for example, let’s pick the 7th entry in the 5th row) and colour a rectangle from that square to the top left corner.
This rectangle has length 7 and height 5, and the area (the number of green squares) is found in the blue circle in the bottom right corner! This is true no matter which pair of numbers in the grid you pick.
Now let’s take this rectangle and flip it around the main diagonal (the red dotted line).
The length and height of the rectangle have swapped, but the area hasn’t changed. So from this we can see that 5 × 7 is the same as 7 × 5. This holds true for any pair of numbers — in mathematics we say that multiplication is commutative.
But this fact means that there is a symmetry in the multiplication table. The numbers above the diagonal line are like a mirror image of the numbers below the line.
So if your aim is to memorise the table, you really only need to memorise about half of it.
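If you'd like to check that symmetry without colouring anything in, a few lines of Python (my own illustration, not part of the original article) build the 12 × 12 table and confirm that the entry in row a, column b always matches the entry in row b, column a.

```python
# Build a 12 x 12 multiplication table as a nested list (row a, column b holds a x b)
table = [[a * b for b in range(1, 13)] for a in range(1, 13)]

# Commutativity: the table is its own mirror image across the main diagonal
print(all(table[a][b] == table[b][a] for a in range(12) for b in range(12)))  # True

# The 7th entry in the 5th row equals the 5th entry in the 7th row: 5 x 7 = 7 x 5 = 35
print(table[4][6], table[6][4])  # 35 35
```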
The building blocks of numbers
To go further with multiplication we first need to do some dividing. Remember that dividing a number just means breaking it into pieces of equal size.
12 ÷ 3 = 4
This means 12 can be broken into 3 pieces, each of size 4.
Since 3 and 4 are both whole numbers, they are called factors of 12, and 12 is said to be divisible by 3 and by 4. If a number is only divisible by itself and 1, it is called a prime number.
But there’s more than one way to write 12 as a product of two numbers:
12 × 1
6 × 2
4 × 3
3 × 4
2 × 6
1 × 12
In fact, we can see this if we look at the multiplication table.
The number of coloured squares in this picture tells you there are six ways you can make a rectangle of area 12 with whole number side lengths. So it’s also the number of ways you can write 12 as a product of two numbers.
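The same count falls out of a short loop. This snippet (mine, not the article's) lists every whole-number pair whose product is 12.

```python
n = 12
pairs = [(a, n // a) for a in range(1, n + 1) if n % a == 0]
print(pairs)       # [(1, 12), (2, 6), (3, 4), (4, 3), (6, 2), (12, 1)]
print(len(pairs))  # 6, one for each rectangle of area 12 with whole-number sides
```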
Incidentally, you might have noticed that the coloured squares seem to form a smooth curve — they do! The curve joining the squares is known as a hyperbola, given by the equation a × b = 12, where ‘a’ and ‘b’ are not necessarily whole numbers.
Let’s look again at the list of products above that are equal to 12. Every number listed there is a factor of 12. What if we look at factors of factors? Any factor that is not prime (except for 1) can be split into further factors, for example
12 = 6 × 2 = (2 × 3) × 2
12 = 4 × 3 = (2 × 2) × 3
No matter how we do it, when we split the factors until we’re left only with primes, we always end up with two 2’s and one 3.
2 × 2 × 3
is called the prime decomposition of 12 and is unique to that number. There is only one way to write a number as a product of primes, and each product of primes gives a different number. In mathematics this is known as the Fundamental Theorem of Arithmetic.
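For small numbers, trial division is enough to find the prime decomposition. The sketch below is my own illustration (not from the article); running it on 12 returns the same two 2's and one 3 regardless of how you might have split the factors by hand, which is the Fundamental Theorem of Arithmetic at work.

```python
def prime_factors(n):
    """Return the prime decomposition of n (for n >= 2) as a list, by trial division."""
    factors = []
    divisor = 2
    while n > 1:
        while n % divisor == 0:
            factors.append(divisor)
            n //= divisor
        divisor += 1
    return factors

print(prime_factors(12))   # [2, 2, 3]
print(prime_factors(756))  # [2, 2, 3, 3, 3, 7]
```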
The prime decomposition tells us important things about a number, in a very condensed way.
For example, from the prime decomposition 12 = 2 × 2 × 3, we can see immediately that 12 is divisible by 2 and 3, and not by any other prime (such as 5 or 7). We can also see that it’s divisible by the product of any combination of those factors you care to pick: a 2, the 3, two 2’s (4), a 2 and a 3 (6), or all three together (12).
Furthermore, any multiple of 12 will also be divisible by the same numbers. Consider 11 x 12 = 132. This result is also divisible by 1, 2, 3, 4, 6 and 12, just like 12. Multiplying each of these with the factor of 11, we find that 132 is also divisible by 11, 22, 33, 44, 66 and 132.
It’s also easy to see if a number is the square of another number: In that case there must be an even number of each prime factor. For example, 36 = 2 × 2 × 3 × 3, so it’s the square of 2 × 3 = 6.
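Counting how often each prime appears makes that square test mechanical. In the sketch below (again my own, with the decompositions hard-coded from the text rather than recomputed), a number is a perfect square exactly when every count is even.

```python
from collections import Counter

def is_square_from_factors(prime_factors):
    """True when every prime in the decomposition occurs an even number of times."""
    return all(count % 2 == 0 for count in Counter(prime_factors).values())

print(is_square_from_factors([2, 2, 3, 3]))  # True:  36 = (2 x 3) ** 2
print(is_square_from_factors([2, 2, 3]))     # False: 12 is not a perfect square
```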
The prime decomposition can also make multiplication easier. If you don’t know the answer to 11 × 12, then knowing the prime decomposition of 12 means you can work through the multiplication step by step.
11 x 12
= 11 x 2 × 2 × 3
= ((11 x 2) × 2) × 3
= (22 × 2) × 3
= 44 × 3
If the primes of the decomposition are small enough (say 2, 3 or 5), multiplication is nice and easy, if a bit paper-consuming. Thus multiplying by 4 (= 2 x 2), 6 (= 2 x 3), 8 (= 2 x 2 x 2), or 9 (= 3 x 3) doesn’t need to be a daunting task!
For example, if you can’t remember the 9 times table, it doesn’t matter as long as you can multiply by 3 twice. (However, this method doesn’t help with multiplying by larger primes; for those, new methods are required – if you haven’t seen the trick for the 11 times table, watch this video.)
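In code, "working through the multiplication step by step" is just a running product over the small primes. This sketch (my illustration, not the author's method) reproduces the 11 × 12 working above, and would handle 756's factors the same way.

```python
def multiply_via_factors(x, factors):
    """Multiply x by each small prime factor in turn, showing the running product."""
    result = x
    for p in factors:
        result *= p
        print(f"x {p} -> {result}")
    return result

multiply_via_factors(11, [2, 2, 3])   # prints: x 2 -> 22, x 2 -> 44, x 3 -> 132
```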
So the ability to break numbers into their prime factors can make complicated multiplications much simpler, and it’s even more useful for bigger numbers.
For example, the prime decomposition of 756 is 2 x 2 x 3 x 3 x 3 x 7, so multiplying by 756 simply means multiplying by each of these relatively small primes. (Of course, finding the prime decomposition of a large number is usually very difficult, so it’s only useful if you already know what the decomposition is.)
But more than this, prime decompositions give fundamental information about numbers. This information is widely useful in mathematics and other fields such as cryptography and internet security. It also leads to some surprising patterns – to see this, try colouring all multiples of 12 in the times table and see what happens. I’ll leave that for homework. |
Human rights in New Zealand
Human rights in New Zealand are addressed in the various documents which make up the constitution. Specifically, the two main laws which protect human rights are the New Zealand Human Rights Act 1993 and the New Zealand Bill of Rights Act 1990. In addition, New Zealand has also ratified numerous international United Nations treaties. The 2009 Human Rights Report by the United States Department of State noted that the government generally respected the rights of individuals, but voiced concerns regarding the social status of the indigenous population.
Universal suffrage for Māori men over 21 was granted in 1867, and extended to European males in 1879. In 1893, New Zealand was the first self-governing nation to grant universal suffrage; however, women were not eligible to stand for parliament until 1919.
A distinctive feature of New Zealand's electoral system is a form of special representation for Maori in parliament. Initially considered a temporary solution on its creation in 1867, this separate system has survived debate as to its appropriateness and effectiveness. Critics have described special representation as a form of apartheid. In 1992, when the Royal Commission on the Electoral System recommended the abolishment of the separate system, strong representations from Maori organisations resulted in its survival.
In 2009 New Zealand was seeking a position on the United Nations Human Rights Council. The bid was withdrawn in March of that year to allow a clear path for the United States to win the seat, after US President Barack Obama reversed his country's previous position that the council had lost its credibility. Then New Zealand Foreign Minister Murray McCully stated "We believe that US membership of the council will strengthen it and make it more effective... By any objective measure, membership of the council by the US is more likely to create positive changes more quickly than we could have hoped to achieve them."
In May 2009, for the first time New Zealand prepared a national Universal Periodic Review (UPR) at the United Nations Human Rights Council in Geneva, Switzerland. During this peer review process many countries praised New Zealand's human rights record and identified that the perception of New Zealand as a comparatively fair and equal society is crucial to its international reputation. Areas where the nation was directed to make improvements include disparities experienced by Māori as demonstrated by key social and economic indicators and the extent of family violence and violence against women and children.
| Treaty | Signed | Ratified |
| --- | --- | --- |
| Convention on the Elimination of All Forms of Racial Discrimination | 25 Oct 1966 | 22 Nov 1972 |
| International Covenant on Civil and Political Rights | 12 Nov 1968 | 28 Dec 1978 |
| Convention on the Elimination of All Forms of Discrimination against Women | 17 Jul 1980 | 10 Jan 1985 |
| Convention on the Rights of the Child | 1 Oct 1990 | 6 Apr 1993 |
| Convention against Torture and Other Cruel, Inhuman or Degrading Treatment or Punishment | 14 Jan 1986 | 10 Dec 1989 |
| International Covenant on Economic, Social and Cultural Rights | 12 Nov 1968 | 28 Dec 1978 |
| Convention on the Rights of Persons with Disabilities | 30 Mar 2007 | 25 Sep 2008 |
The legal system takes the framework of a parliamentary representative democratic monarchy. In the absence of a single constitution, various legislative documents such as the Constitution Act 1986, Imperial Laws Application Act 1988, New Zealand Bill of Rights Act 1990 and the Human Rights Act 1993 have been implemented to cover such areas.
Human rights in New Zealand have never been protected by any single constitutional document or legislation, and no single institution has been primarily responsible for enforcement. Because New Zealand's human rights obligations are not entrenched and are simply part of common law, Parliament can simply ignore them if it chooses. The Human Rights Commission has identified this constitutional arrangement as an area in need of action to identify opportunities for giving greater effect to human rights protections.
Section 7 reports
Section 7 of the Bill of Rights Act requires the Attorney-General to draw to the attention of Parliament the introduction of any Bill that is inconsistent with the Act. The Ministry of Justice, which prepares this advice for the Attorney-General, requires a minimum of two weeks to review the draft legislation. A number of bills have been reported by the Attorney-General as being inconsistent with the New Zealand Bill of Rights Act 1990.
The 2009 report by the U.S. Department of State noted that, "[t]he law provides for an independent judiciary, and the government generally respected judicial independence in practice". In recent years concerns have been expressed that New Zealand is not performing as well in regard to human rights as it used to. A study released in 2015, Fault Lines: Human rights in New Zealand said New Zealand's human rights legislation - the Bill of Rights Act and the Human Rights Act - "were problematic and didn't prevent the passing of other laws, which breach rights".
Freedom of speech
The right to freedom of speech is not explicitly protected by common law in New Zealand but is encompassed in a wide range of doctrines aimed at protecting free speech. An independent press, an effective judiciary, and a functioning democratic political system combine to ensure freedom of speech and of the press. In particular, freedom of expression is preserved in section 14 of the New Zealand Bill of Rights Act 1990 (BORA) which states that:
“Everyone has the right to freedom of expression, including the right to seek, receive, and impart information and opinions of any kind in any form”.
This provision reflects the more detailed one in Article 19 of the ICCPR. The significance of this right and its importance to democracy has been emphasised by the New Zealand courts. It has been described as the primary right without which the rule of law cannot effectively operate. The right is not only the cornerstone of democracy; it also guarantees the self-fulfilment of its members by advancing knowledge and revealing truth. As such, the right has been given a wide interpretation. The Court of Appeal has said that section 14 is “as wide as human thought and imagination”. Freedom of expression embraces free speech, a free press, transmission and receipt of ideas and information, freedom of expression in art, and the right to silence. The right to freedom of expression also extends to the right to seek access to official records. This is provided for in the Official Information Act 1982.
There are limitations on this right, as with all other rights contained in BORA.
“It would not be in society’s interests to allow freedom of expression to become a licence irresponsibly to ignore or discount other rights and freedoms”.
Under article 19(3) ICCPR, freedom of expression can be limited in order to:
- respect the rights and reputations of others; and
- protect national security, public order, or public health and morals.
Jurisprudence under BORA closely follows these grounds. Freedom of expression is restricted only so far as is necessary to protect a countervailing right or interest. The Court of Appeal has held that the restriction on free speech must be proportionate to the objective sought to be achieved; the restriction must be rationally connected to the objective; and the restriction must impair the right to freedom to the least possible amount. The right to freedom of expression may also be limited by societal values which are not in BORA, such as the right to privacy and the right to reputation.
Hate speech is prohibited in New Zealand under the Human Rights Act 1993 under sections 61 and 131. These sections give effect to article 20 ICCPR. These sections and their predecessors have rarely been used. They require the consent of the Attorney-General to prosecute. Incitement to racial disharmony has been a criminal offence since the enactment of the Race Relations Act 1971. Complaints about racial disharmony often concern statements made publicly about Maori-Pakeha relations and immigration, and comments made by politicians or other public figures regarding minority communities.
Freedom of the media
Freedom of the media is also recognised as an important democratic principle. New Zealand is ranked eighth on the Press Freedom Index 2010 and there tends to be strong legal, public and media comment where this right is infringed. Section 68 of the Evidence Act 2006 provides a qualified form of privilege for journalists who wish to protect the identity of their sources. The Court of Appeal has also laid down guidelines for the police when searching media premises for law enforcement reasons, so that their sources remain protected.
The Courts may order that publication of information be withheld in whole or part, in the interests of justice. Often this is to protect the right to a fair trial, to protect the interests of the parties, or to uphold public confidence in the integrity of the justice system. It is not uncommon for New Zealand Courts to suppress names and evidence in civil and criminal proceedings so as to protect the right to a fair trial.
"The law of New Zealand must recognise that in cases where the commencement of criminal proceedings is highly likely the Court has inherent jurisdiction to prevent the risk of contempt of Court by granting an injunction. But the freedom of the press and other media is not to be interfered with lightly and it must be shown that there is a real likelihood of a publication of material that will seriously prejudice the fairness of the trial".
The Broadcasting Act 1989 is a statute limiting the media’s right to freedom of expression. Broadcasters have a responsibility to maintain programme standards that are consistent with: the observance of good taste and decency, the maintenance of law and order, the privacy of the individual, the principle of balance when controversial issues of public importance are discussed, and any approved code of broadcasting practice applying to programmes. The Broadcasting Standards Authority is a Crown Entity that hears complaints from the public where codes of practice have been breached. Print news media are self-regulated through the Press Council.
Right to a fair trial
A fair trial in New Zealand has been defined as “a court hearing that is procedurally just to both parties”; it applies to every citizen and is central to the functioning of the justice system. The right has its greatest influence in criminal procedure, but it also carries weight in other areas of New Zealand law, such as administrative law (through the rule of law). This essential right has operated since the early days of New Zealand, owing to the continuation of English law during colonisation, and has continued to develop over the years alongside the international community.
The Magna Carta (1215) is seen as one of the earlier instruments to clearly set out the right to a fair trial for all free men. It applies in New Zealand law because it is listed in the Imperial Laws Application Act 1988, which allows a handful of English statutes to remain legally binding.
The important clause is Clause 39:
“No free man shall be seized or imprisoned, or stripped of his rights or possessions, or outlawed or exiled, or deprived of his standing in any other way, nor will we proceed with force against him, or send others to do so, except by the lawful judgement of his equals or by the law of the land.”
It was found to have paved the way for trial by jury, equality before the law, habeas corpus and a ban on arbitrary imprisonment; all rights that are within the shadow of the right to a fair trial.
Rule of law
The rule of law, found in every democratic society including New Zealand, is essentially the authority the law has on every citizen, regardless of their status. It has been defined as a doctrine that holds the law above all citizens in an equal manner, and even government officials are accountable to the ordinary courts of law.
The rule of law is a source of the right to a fair trial, as the doctrine protects the process of the court and national equality when considering the application of the law.
International covenants recognised by New Zealand
The International Covenant on Civil and Political Rights is the main international treaty which lays out the right to a fair trial. Article 14(1) says:
“All persons shall be equal before the courts and tribunals. In the determination of any criminal charge against him, or of his rights and obligations in a suit at law, everyone shall be entitled to a fair and public hearing by a competent, independent and impartial tribunal established by law. The press and the public may be excluded from all or part of a trial for reasons of morals, public order or national security in a democratic society, or when the interest of the private lives of the parties so requires, or to the extent strictly necessary in the opinion of the court in special circumstances where publicity would prejudice the interests of justice; but any judgement rendered in a criminal case or in a suit at law shall be made public except where the interest of juvenile persons otherwise requires or the proceedings concern matrimonial disputes or the guardianship of children"
New Zealand has also made a commitment to uphold the Universal Declaration of Human Rights (UDHR) and support the efforts of the Office of the United Nations High Commissioner for Human Rights (OHCHR), and has put in place the Human Rights Commission (Te Kahui Tangata) to ensure this.
In regards to the right to a fair trial, article 10 of the UDHR states:
“Everyone is entitled in full equality to a fair and public hearing by an independent and impartial tribunal, in the determination of his rights and obligations and of any criminal charge against him.”
New Zealand Bill of Rights Act 1990
It is thought that New Zealand passed the Bill of Rights Act to fulfil its obligations, as a state party, under the International Covenant on Civil and Political Rights (ICCPR); section 25 of the Act essentially replicates Article 14 of the ICCPR.
"Section 25 Minimum of Criminal Procedure Everyone who is charged with an offence has, in relation to the determination of the charge, the following minimum rights: (a)the right to a fair and public hearing by an independent and impartial court: (b)the right to be tried without undue delay: (c)the right to be presumed innocent until proved guilty according to law: (d)the right not to be compelled to be a witness or to confess guilt: (e)the right to be present at the trial and to present a defence: (f)the right to examine the witnesses for the prosecution and to obtain the attendance and examination of witnesses for the defence under the same conditions as the prosecution: (g)the right, if convicted of an offence in respect of which the penalty has been varied between the commission of the offence and sentencing, to the benefit of the lesser penalty: (h)the right, if convicted of the offence, to appeal according to law to a higher court against the conviction or against the sentence or against both: (i)the right, in the case of a child, to be dealt with in a manner that takes account of the child's age.”
Criminal Procedure Act 2011
Although the Criminal Procedure Act 2011 does not expressly set out the right to a fair trial, the right to a fair trial can be seen as the key reason behind several of its provisions.
The following table lists some of the sections, where the right to a fair trial is essential for the courts to consider.
| Section | Contents | The link to fair trial |
| --- | --- | --- |
| s18 | Court ordering further particulars. | The court must be satisfied that it is necessary for a fair trial. |
| s197 | The power to clear the court. | For a court to be cleared, the court must be satisfied that there is “a real risk of prejudice to a fair trial”. |
| s200 | The suppression of the defendant’s identity. | For suppression of the defendant’s identity to occur, the court must be satisfied that publication of the name would “create a real risk of prejudice to a fair trial”. |
| s202 | The suppression of the identities of any witnesses, victims, and connected persons. | The court may only do this if it is satisfied that publication of the identity would be likely to “create a real risk of prejudice to a fair trial”. |
| s205 | The suppression of evidence and submissions. | Suppression may only occur when the court is satisfied that publication would “create a real risk of prejudice to a fair trial”. |
| s232 | Appeals | Appeals must be accepted when there has been any miscarriage of justice, which may include a result that is seen as an unfair trial. |
Right to a fair trial and the media
Aside from the limits imposed by sections 4, 5 and 6 of the Bill of Rights and New Zealand’s “unwritten” constitution, other rights may impinge on the right to a fair trial, since one right can override another. The best example is the relationship between freedom of speech and the right to a fair trial. These two rights frequently conflict, due to the nature of the media.
In New Zealand, there is a focus on finding a balance between the contrasting rights; courts focus on a balance between one person’s right and another’s. Although there is nothing expressly stating a hierarchy of rights, the court does in fact have the ability to limit one right so as to uphold another. In New Zealand there is full recognition of the importance of freedom of speech. However, on numerous occasions courts have upheld the right to a fair trial over freedom of speech exercised through the media.
It has been said that in the event of a conflict, if all other things are equal between the two rights, the right to a fair trial should prevail. However, it has been argued that greater tolerance should be given to freedom of speech when the issue involves something of “substantial public interest”. Overall, the freedom of the press and of speech is not a right to be interfered with lightly; any interference must be a justified limitation, warranted only where publication about the case would give rise to serious prejudice.
Freedom of religion
Freedom of religion is addressed specifically in the New Zealand Bill of Rights Act 1990, and the government has generally respected this in practice.
New Zealand is a parliamentary democracy, and as such acquires rights generally associated with such a system. Democratic rights include electoral rights, the right for citizens to take part (directly or indirectly) in government, and the right to equal access to the public service. There is an associated duty of responsible citizenship, or being willing to play one’s part in public affairs and to respect the rights and freedoms of others. These rights give the ability to participate in both public and political life when considered together.
Political and democratic rights are purported to be upheld by the ‘unwritten’ Constitution of New Zealand. One of the many sources that make up the constitution is the New Zealand Bill of Rights Act 1990. This legislation was the first part of the New Zealand constitution to refer specifically to the International Covenant on Civil and Political Rights (ICCPR) and the rights it contains. Together with the New Zealand Human Rights Act 1993, these two statutes form the basis of human rights protection in New Zealand. The ICCPR was not incorporated directly into the legal system; however, many of the rights within it were replicated in the Bill of Rights Act 1990. These include electoral rights under section 12 and freedom of association under section 17. The Human Rights Act 1993 also provides for non-discrimination on the basis of political opinion under section 21.
There has been concern expressed that due to the nature of the New Zealand constitution, and the lack of full integration into the legal system, rights under the ICCPR are not sufficiently protected. The Bill of Rights Act 1990 is not entrenched legislation, and this means that it can effectively be overturned by a simple majority in Parliament. A counter to this concern is that rights do exist in the New Zealand constitution regardless; however it is the finding of them that is the difficult part.
Electoral rights include the right to vote for Members of Parliament, and the right to stand for the House of Representatives. Voting is by secret ballot, and there is universal suffrage, with voting rights given to both men and women aged 18 and over who are New Zealand citizens or permanent residents. Freedom of association allows people to join with other individuals in groups that express, promote, pursue and defend common interests collectively. The Electoral Act 1993 is also important because it is one of the few ‘constitutional’ documents to contain entrenched provisions. These protect the right to vote and the size of the electorates which represent ‘the people’. In the New Zealand context, entrenching provisions is one of the most effective ways to protect rights, as there is no possibility of total protection due to the doctrine of Parliamentary Sovereignty. However, entrenching provisions would appear to indicate an intent to protect rights. Section 6 of the Bill of Rights Act provides for judicial interpretation in favour of rights-protecting interests, which allows judges to interpret around provisions in other legislation that may appear to impede human rights.
This in itself attracted opposition, with arguments that allowing such a provision to exist undermines the doctrine of Parliamentary Sovereignty and impinges on the political rights of citizens, as it allows unelected and unrepresentative judges to interpret rights somewhat at their discretion. On this critique, the universality of rights under the Universal Declaration of Human Rights would also be threatened, as those who could afford good lawyers would be at a greater advantage. Whether this is true in practice has not been proven; however, it was one of the biggest points of opposition to the Bill of Rights Act prior to its enactment.
New Zealand context
The ICCPR also contains statements on all peoples having a right to self-determination. Part of this right to self-determination is the right to determine political status freely. International human rights standards recognise that democratic and political rights require the protection of a range of other rights and freedoms, including the right to justice, freedom of expression, the right to peaceful assembly and freedom of association contained in the ICCPR. They must also be enjoyed without discrimination. This is stated in the ICCPR (as well as the Convention on the Elimination of All Forms of Discrimination against Women) (CEDAW) and the Convention on the Elimination of Racial Discrimination (CERD). Both CEDAW and CERD provide with specificity that the State should take steps to ensure the equal representation and participation of women, and of all ethnic and racial groups, in political processes and institutions (Article 7 of CEDAW and Article 5c of CERD).
New Zealand maintains a system by which these political rights are upheld. Equal possibility of representation exists for any citizen, regardless of gender or race. In this respect, the democratic rights standard under the ICCPR (and other UN conventions) is fulfilled, with women and minority groups being able to vote and to be elected to Parliament. For example, New Zealand has female Members of Parliament, as well as members from the Maori, Pacific Islander, Asian, homosexual and Muslim minorities. Maori political rights are further protected by giving Maori people the option to be on the General or Maori electoral roll, and by having reserved seats in the House of Representatives. This formula in turn determines the number of Maori electorates, General electorates and thus party list seats under the Mixed member proportional representation electoral system.
Citizens are also given a further ability to participate in the system and exercise some democratic rights by way of ‘citizen initiated referenda’ (or citizen initiatives). However, these are not binding on Parliament and as such do not necessarily have a large degree of influence. They do, however, give policy makers an indication of public opinion, and results can be taken into consideration when formulating Bills at various stages. Political and democratic rights are also protected under the Treaty of Waitangi, one of New Zealand’s founding documents and a source of law under the unwritten constitution. Article 1 of the Treaty confers the right to govern in New Zealand, which is the basis for the Westminster system of government. The right of Maori to govern their own affairs where necessary is implied by Article 2, while the extent to which all New Zealanders are proportionately represented in the institutions of the State, and participate in political processes such as voting, is covered by Article 3.
Framework for political rights protection
Human rights and democracy are internationally recognised as interdependent and provide a framework for assessing the extent to which democratic rights are respected in law and practice. According to this framework, there are two key democratic principles. The principle of popular control is the right to a controlling influence over public decisions and decision-makers. The principle of political equality is the right to be treated with equal respect and as of equal worth in the context of such decisions. Recognition of the above principles requires a framework for guaranteed citizens' rights, a system of representative and accountable political institutions subject to popular authorisation, and active channeling of popular opinion and engagement with government by the people. Under this model, New Zealand recognises the political rights of its citizens in both law and practice. It does so by way of the Human Rights Commission, which provides a framework within the legal and political system; the ability to communicate and participate in the political system, and processes such as judicial review and complaints to the Office of the Ombudsman hold government and governmental departments accountable where necessary in order to maintain political rights.
See generally: Economic, social and cultural rights
On the 28th December 1978 New Zealand ratified the International Covenant on Economic, Social and Cultural Rights (ICESCR). Other international treaties which contain provisions concerning economic, social and cultural rights (ESCR) have also been ratified by New Zealand, such as the International Convention on the Elimination of All Forms of Racial Discrimination (CERD), the Convention on the Elimination of All Forms of Discrimination against Women (CEDAW), the Convention on the Rights of the Child (CRC) and the Convention on the Rights of Persons with Disabilities. However, ESCR are not specifically protected by New Zealand's human rights focused statutes, the New Zealand Bill of Rights Act 1990 or the Human Rights Act 1993. The New Zealand Bill of Rights Act 1990 is predominantly concerned with the protection of civil and political rights. Including ESCR in the New Zealand Bill of Rights Act 1990 was suggested by the parliamentary Justice and Law Reform Select Committee in 1988, but was rejected by the Government. Currently, ESCR are not considered justiciable in New Zealand because they affect policy and resource allocation considerations, matters for the New Zealand Government and Parliament to decide. Elements of various ESCR are protected by domestic legislation though. New Zealand has not ratified the Optional Protocol to the International Covenant on Economic, Social and Cultural Rights.
The right to an adequate standard of living
See generally: Right to an adequate standard of living
The right to an adequate standard of living comprises other ESCR, such as the rights to food, water and housing.
See generally: Right to food
Though New Zealand does not face the levels of poverty exhibited in developing countries, it is generally recognised that relative poverty does exist in New Zealand. Relative poverty occurs when members of a society fall below the living standards which prevail in the society in which they live. For example, in 2013 260,000 dependent children, aged from 0 to 17 years, lived in relative poverty. The realisation of the right to food has been aided by New Zealand charities. In the year 2013-2014, The Salvation Army provided 27,879 families with food parcels. KidsCan currently provides meals for 15,065 children per week. A Bill was introduced into the New Zealand Parliament in November 2012 to amend the Education Act 1989 to enable State-funded breakfast and lunch meals to be provided to students attending decile 1 and 2 schools, but it did not proceed beyond its first reading in March 2015.
See also: Water in New Zealand
Advocacy concerning the right to water in New Zealand is centred upon the privatisation of the supply of water to households and opposing a 'user pay' approach to water consumption. Local government organisations that provide water services to communities are required to maintain their capacity to meet obligations such as retaining ownership and control of water services in their district/region. A local government organisation is allowed to enter into contracts concerning any aspect of providing water services, but they will remain legally responsible to provide such services and develop policy on the matter. In January 2015 the New Zealand Māori Council proposed the allocation of water rights be administered through a national water policy and an associated commission. The Council's co-chair, Sir Eddie Durie, stated Māori have a 'senior right' to water in New Zealand, but their rights should not override what is good for the general public. The Human Rights Commission stated in 2012 there was increasing concern in New Zealand over the quality of drinking water, the effects of the agricultural industries' consumption of water, Treaty of Waitangi considerations regarding rights to and ownership of water, and access to water.
See generally: Right to housing
Discrimination in housing is contrary to the New Zealand Bill of Rights Act 1990, the Human Rights Act 1993 and the Residential Tenancies Act 1986. Housing affordability in regard to both the rental market and the property market is a social issue in New Zealand which has made access to housing difficult for even middle-class families. In Lawson v Housing New Zealand, the applicant challenged the increase in rent to market levels for state housing provided by Housing New Zealand (a state-owned enterprise), because it had adverse effects on the living standards of existing state housing tenants. Because the right to housing is not specifically incorporated into domestic legislation, the Court declined to consider whether the Government had met its international obligations concerning this right, and said it was instead a matter on which international forums could judge the Government. In 2013, the Ministry of Business, Innovation and Employment stated that in Christchurch, due to the loss of housing in the 2010 and 2011 Canterbury earthquakes, there was a shortfall of 7,100 homes. The Human Rights Commission stated in December 2013 that there was a shortage of rental, temporary and emergency accommodation in Christchurch. The Auckland Housing Accord is currently being implemented by the Auckland Council and central Government in order to hasten and increase the number of affordable houses built in Auckland, to combat the housing crisis affecting the city.
The right to health
See generally: Right to health
There is no explicit right to health in New Zealand. However, there is a statutory framework which has been implemented over several decades which provides for the administration of health care and services. This framework includes the New Zealand Public Health and Disability Act 2000, the Health and Disability Services (Safety) Act 2001, the Health Practitioners Competence Assurance Act 2003 and the Health Act 1956. The Accident Compensation Act 2001 also provides no-fault insurance cover for personal injuries, administered by the Accident Compensation Corporation. The New Zealand Bill of Rights Act 1990 also protects the right to health through the right not to be subjected to medical or scientific experimentation, the right to refuse medical treatment and the right to freedom from discrimination. A publicly funded health system exists in New Zealand. District Health Boards decide what health services are to be funded in their region, based on national objectives and the specific needs of their locality, but this process has been criticised by commentators who claim it is not open and objective. The limited resources of the system were highlighted in Shortland v Northland Health Ltd, where a decision by medical professionals to discontinue a patient’s dialysis treatment for resource allocation reasons was upheld, even though continued treatment would have saved the patient’s life. Poorer health outcomes for Māori and Pasifika people persist.
The right to education
See also: Education in New Zealand
The right to education is not expressly provided for in New Zealand domestic law, but the realisation of the right can be seen across various statutes, policies and administrative practices. Such statutes include the Education Act 1989, the Education Standards Act 2001 and the Private Schools Conditional Integration Act 1975. From the ages of 5 years to 18 years, a person has the right to free primary and secondary education. This right extends to people who have special educational needs. Citizens and residents of New Zealand must be enrolled at a registered school from their 6th birthday until their 16th birthday. In 2014, 95.9% of new school entrants had participated in early childhood education in the six months prior to starting primary school. 78.6% of 18-year-olds in 2013 had the equivalent of an NCEA Level 2 qualification or higher. The number of Māori and Pasifika students leaving school with a National Qualifications Framework qualification has increased from 2004 levels, but the number of 18-year-old Māori and Pasifika people with an NCEA Level 2 equivalent qualification or higher was less than that of European or Asian students in New Zealand. In 2008 the Secretary of the Ministry of Education acknowledged the link between economic and social factors and educational achievement, and that efforts to ensure that socio-economically disadvantaged children remained engaged in education needed to continue.
The right to work
See also: Labour rights in New Zealand
Elements of the right to work and the right to the enjoyment of just and favourable work conditions are protected by the Minimum Wage Act 1983, the Health and Safety in Employment Act 1992, the Employment Relations Act 2000 and the Holidays Act 2003. New Zealand has ratified 60 of the International Labour Organization’s Conventions, with 51 in force and 9 having been denounced. Discrimination in regard to accessing employment is prohibited on the grounds of age (from 16 years), colour, disability, employment status, ethical belief, ethnic or national origin, family status, marital status, political opinion, race, religious belief, sex (including childbirth and pregnancy) and sexual orientation. In Ministry of Health v Atkinson, the Court of Appeal held the Ministry of Health’s policy that family members who provide support services for their disabled children were ineligible to be paid for such work was discriminatory on the basis of family status. However, the decision was overturned by the Public Health and Disability Amendment Act 2013. The Human Rights Commission states the country is making some progress in regard to the role of women in the workforce. Women remain underrepresented in areas of public life such as law, governance and corporate sector leadership. The gender pay gap in 2014 was 9.9 per cent. In 2013, the Employment Relations Act 2000 was amended to restrict workers’ entitlements to paid breaks.
See also: Welfare in New Zealand
New Zealand has a history of providing various forms of social security. The system has been designed to assist people when they are, for example, ill, unemployed, injured and elderly. New Zealand's Ministry of Social Development both develops and implements social security policy. The Social Security Act 1964 provides for a three-tiered system of benefits:
- Benefits to those in need such as the elderly, solo parents, the ill and the unemployed,
- Supplementary assistance, which recognises that some people face unavoidable expenditure, for example, in the areas of childcare and accommodation, and
- Financial assistance that provides a 'safety net', such as the Emergency Benefit.
Those who have suffered an accidental personal injury may also be eligible for financial support under the Accident Compensation Act 2001. Discrimination in the social security system has been alleged though. In Child Poverty Action Group v Attorney-General, provisions in the Income Tax Act 2007 prohibited families who received income benefits or accident compensation from being eligible for tax credits, but such discrimination was found to be justified under section 5 of the New Zealand Bill of Rights Act 1990. Academics have stated that New Zealand takes a 'needs-based' approach to the administration of social security, as opposed to a 'rights-based' approach.
Concluding observations of the Committee on Economic, Social and Cultural Rights 2012
See generally: Committee on Economic, Social and Cultural Rights
The Committee on Economic, Social and Cultural Rights (CESCR) is a body consisting of 18 independent experts tasked with monitoring State parties' implementation of the ICESCR. New Zealand's efforts in implementing the ICESCR were last assessed and reported on by the CESCR in May 2012. This was New Zealand's third report from the CESCR. The Committee made several recommendations to New Zealand in order for the country to increase its protection of ESCR. Such recommendations included incorporating ESCR into the New Zealand Bill of Rights Act 1990 and enhancing the enjoyment of ESCR for Māori, Pasifika and people with disabilities. Other recommendations included the rights of Māori to land, water and other such resources being legislated for, altering legislation to effectively provide for equal pay, continuing to guarantee the right to safe and affordable water, strengthening action to discourage tobacco consumption (especially among Māori and Pasifika youth) and ensuring the right to housing for all is guaranteed by policies and legislation.
There are concerns regarding inequality between Māori and other ethnic groups, in terms of the disproportionate numbers of Māori people in the penitentiary system and on welfare support. The UN Committee on the Elimination of Racial Discrimination highlighted issues regarding the government handling of Māori land claims, suggesting that amendments should be made to the Treaty of Waitangi and the New Zealand Bill of Rights Act 1990.
The Māori population on average runs greater risks of many negative economic and social outcomes. Over 50% of Māori live in areas in the three highest deprivation deciles, compared with 24% of the rest of the population. Although Māori make up only 14% of the population, they make up almost 50% of the prison population. Other issues include higher unemployment rates than the general population of New Zealand. There are also issues regarding health, including higher levels of alcohol and drug abuse, smoking and obesity. Less frequent use of healthcare services means that late diagnosis and treatment intervention lead to higher levels of morbidity and mortality per head of population than Pākehā (non-Māori) in many manageable conditions, such as cervical cancer and diabetes. Māori also have considerably lower life expectancies compared to non-Māori. In 2005-2007, Māori male life expectancy at birth was 70.4 years versus 79 years for non-Māori males (a difference of 8.6 years), while life expectancy for Māori females was 75.1 years versus 83 years for non-Māori females (a difference of 7.9 years).
Others have voiced concern for the area of 'linguistic human rights', due to the degree of prejudice against the use of Māori language.
New Zealand is a party to the 1951 UN Convention Relating to the Status of Refugees and the 1967 protocol. In 2009, the government proposed an immigration bill which had provisions for passenger screening. In addition, the bill would permit the withholding of reasons for the denial of entry, and would deny the applicant access to judicial review. Such developments caused concern that the bill could lead to the possibility for prolonged detention.
Human Rights Commission
The primary watchdog for human rights in New Zealand is the Human Rights Commission. Its stated mission is to work "for a fair, safe and just society, where diversity is valued, human rights are respected, and everyone is able to live free from prejudice and unlawful discrimination." The body is a member of Asia Pacific Forum of National Human Rights Institutions and of the International Coordinating Committee of national human rights institutions.
In 2010 the Commission conducted a publicly available review of human rights in New Zealand in order to both identify the areas in which New Zealand does well, and where it could do better to combat persistent social problems. The 'report card' is an update of the Commission's first report in 2004, and will guide its work for the next five years. The report notes steady improvements in New Zealand's human rights record since 2004, but also "the fragility of some of the gains and areas where there has been deterioration." In the report, the Commission identifies thirty priority areas for action on human rights in New Zealand under a number of sections: general; civil and political rights; economic, social and cultural rights; and rights of specific groups.
Limits on human rights in New Zealand
New Zealand Bill of Rights 1990
In part one of the Bill of Rights, under the general provisions, there are clear warnings that the rights found in the Act are not supreme law and can give way to Acts that are inconsistent with them.
Section 4 states that where there is an inconsistency between Acts, the Bill of Rights will yield. Section 5 states that all rights and freedoms are subject to such reasonable limits prescribed by law as can be demonstrably justified in a free and democratic society.
It is important to note that the Act still contains procedures to uphold rights where possible. Section 6 of the Bill of Rights Act requires the courts, wherever an enactment can be given a meaning consistent with the rights in the Act, to prefer that meaning. This section can be seen as an immediate remedy for basic or unintentional inconsistencies which could take away an individual's rights.
Section 7 of the Bill of Rights Act is also important for upholding human rights, as it creates a mechanism by which the Attorney-General is obliged to report any inconsistency with the Bill of Rights to Parliament. This is a paramount section: it keeps the legislature accountable for upholding New Zealanders' individual rights, and it also mitigates unintentional breaches of those rights.
New Zealand's unwritten constitution
New Zealand is seen as one of the few countries in the world without a single written document that acts as the state's constitution. New Zealand's unwritten constitution is instead a collection of many different Acts, including the New Zealand Bill of Rights Act 1990. There are no entrenched Acts or Bills in New Zealand law, so the highest power rests with Parliament. This means that, with a majority vote, Parliament can overturn any piece of legislation regardless of how much emphasis the courts place on it.
There has been criticism over the years of this "unwritten constitution" and much encouragement from the international community to change it. The 2009 Universal Periodic Review of New Zealand, through the Human Rights Council, is a good demonstration of this. In that review, concerns were expressed that, because the constitution is not entrenched, there is no overarching protection for human rights. Multiple states expressed concern over the lack of protection for human rights under the current constitutional framework, and strongly recommended that New Zealand take steps towards constitutional entrenchment and therefore entrenched protection of human rights. Aside from these issues, the international community collectively commended New Zealand's work in upholding human rights, such as the number of ratifications completed and its work with the Māori people.
There have been small glimmers of movement towards an entrenched, written constitution in the past few years. The "Constitutional Conversation" of 2013 was a nationwide forum in which a select panel considered what should be done, while also taking the views of the public into consideration. Nothing has yet come of it. There is an opinion that it is not a question of "if" but of "when" the change will happen, as New Zealand continues to develop its own distinct identity.
- Unless otherwise indicated, the declarations and reservations were made upon ratification, accession or succession
- The instrument of ratification also specifies that "such ratification shall extend to Tokelau only upon notification to the Secretary-General of the United Nations of such extension".
- "What are human rights?". The Human Rights Commission. Retrieved 11 August 2011.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- 2009 U.S Dept of State Human Rights Report: New Zealand
- "Māori and the Vote". Elections.org.nz. Retrieved 10 August 2011.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Byrnes, A, Connors, JF & L Bik (1997). Advancing the Human Rights of Women: Using International Human Rights Standards in Domestic Litigation. Commonwealth Secretariat. p. 192. ISBN 9780850925159. <templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- "Change in the 20th century - Maori and the vote". NZ History online. Retrieved 14 April 2008.
|last1=in Authors list (help)<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- "NZs National Universal Periodic Review (UPR) Report". Human Rights Commission. Retrieved 7 May 2008.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Wong, Gilbert. "NZ makes clear stand for human rights at UN". Human Rights Commission. Retrieved 11 May 11, 2009. Check date values in:
- UN Treaty Collection: New Zealand
- "New Zealand drops Human Rights Council bid, steps aside for US". EarthTimes. Retrieved 31 Mar 2009.
- Human Rights in New Zealand 2010, Human Rights Commission
- NZ slipping in human rights issues - report, Radio New Zealand, 2 April 2015
- New Zealand's human rights performance slipping, New Zealand Herald, 2 April 2015
- Andrew Butler and Petra Butler The New Zealand Bill of Rights Act: a commentary (LexisNexis NZ Ltd, Wellington, 2005) at 305
- R v Secretary of State for the Home Department, ex parte Simms 2 AC 115 at p125
- A Bill of Rights for New Zealand: A White Paper (New Zealand Parliament House of Representatives) 1985. AJHR. 6 , p 79.
- Moonen v Film and Literature Board of Review 2 NZLR 9 para 15
- Tipping J in Hosking v Runting 1 NZLR 1
- Andrew Butler and Petra Butler The New Zealand Bill of Rights Act: a commentary (LexisNexis NZ Ltd, Wellington, 2005) at 323
- Police v Geiringer [1990-1992] 1 NZBORR 331
- Moonen v Film and Literature Board of Review 2 NZLR 9
- TVNZ Ltd v Attorney-General 2 NZLR 641
- Television New Zealand Ltd v Solicitor-General 1 NZLR 1 (CA) at 3
- Peter Spiller Butterworths New Zealand Dictionary (7th ed, LexisNexis, Wellington, 2011) at 113.
- Seen through the large focus in the Criminal Procedure Act 2011, and the ICJ’s Trial Observation Manual of Criminal Proceedings (2009)
- Imperial Laws Application Act 1988
- The full text of the Magna Carta
- Fair Trial from a UK perspective
- Found through the use of it in legislation, such as the Supreme Court Act 2003, s3(2).
- Peter Spiller Butterworths New Zealand Dictionary (7th ed, LexisNexis, Wellington, 2011) at 272.
- BNZ v Savril Contractors Ltd 2 NZLR 475 (CA).
- New Zealand History: Universal Declaration of Human Rights
- Chris Gullivan "Reliability, Hearsay and the Right to a Fair Trial in New Zealand" in P Roberts and J Hunter (ed) Criminal Evidence and Human Rights (Hart Publishing Ltd, Oxford, 2012) at 327.
- The Criminal Procedure Act 2011
- Crown Law Office "Contempt and the Media: Constitutional Safeguard or State Censorship" (1 January 1998)
- New Zealand Bill of Rights Act 1990, section 5.
- Solicitor-General v Avon Ltd 1 NZLR 225, 230.
- Solicitor-General v Wellington Newspaper Ltd 1 NZLR 45.
- J F Borrows in "Freedom of the Press under the NZBORA" in Joseph: Essays on the Constitution (1995) 286 at 303.
- Solicitor-General v TVNZ 1 NZLR (CA).
- Human Rights Commission of New Zealand 'Human Rights in New Zealand today'
- New Zealand Bill of Rights Act 1990
- Human Rights Act 1993
- Elections New Zealand website 'Civil and political rights in New Zealand'
- Beetham, D. (2002). Democracy and human rights: Contrast and convergence. Seminar on the Interdependence between Democracy and Human Rights, Conference papers. Geneva: Office of the High Commissioner for Human Rights
- Ministry of Justice 'International Covenant on Economic, Social and Cultural Rights'.
- United Nations Human Rights 'Ratification Status for New Zealand'
- Natalie Baird and Diana Pickard "Economic, social and cultural rights: a proposal for a constitutional peg in the ground" NZLJ 289.
- Paul Hunt "Reclaiming Economic, Social and Cultural Rights (1993) 1 Waikato L Rev 141.
- Final Report of the Justice and Law Reform Committee on a White Paper on a Bill of Rights for New Zealand (1988) 1.8C, 3.
- See for example the New Zealand High Court decision of Lawson v Housing New Zealand 2 NZLR 474.
- Joss Opie "A Case for Including Economic, Social and Cultural Rights in the New Zealand Bill of Rights Act 1990" (2012) 43 VUWLR 471 at 482.
- Peter Hosking "Freedom from Poverty: The Right to an Adequate Standard of Living" in Margaret Bedggood and Kris Gledhill (eds) Law into Action: Economic, Social and Cultural Rights in Aotearoa New Zealand (Thomson Reuters, Wellington, 2011) 112 at 113.
- Peter Hosking "Freedom from Poverty: The Right to an Adequate Standard of Living" in Margaret Bedggood and Kris Gledhill (eds) Law into Action: Economic, Social and Cultural Rights in Aotearoa New Zealand (Thomson Reuters, Wellington, 2011) 112 at 115-116.
- UNESCO 'Poverty'.
- NZ Child & Youth Epidemiology Service 'Child Poverty Monitor 2014 Technical Report' at 12.
- The Salvation Army New Zealand, Fiji & Tonga Territory 'The Salvation Army Annual Report 2013-2014' at 5.
- KidsCan 'Food for Kids'
- New Zealand Parliament 'Education (Breakfast and Lunch Programmes in Schools) Amendment Bill'.
- Peter Hosking "Freedom from Poverty: The Right to an Adequate Standard of Living" in Margaret Bedggood and Kris Gledhill (eds) Law into Action: Economic, Social and Cultural Rights in Aotearoa New Zealand (Thomson Reuters, Wellington, 2011) 112 at 125.
- Local Government Act 2002, section 130.
- Local Government Act 2002, section 136.
- Radio New Zealand 'Moves to reassure public over water rights'.
- Human Rights Commission 'Human Rights and Water' 2012, at 30.
- Human Rights Commission 'Monitoring Human Rights in the Canterbury Earthquake Recovery' 2013, at 58.
- Peter Hosking "Freedom from Poverty: The Right to an Adequate Standard of Living" in Margaret Bedggood and Kris Gledhill (eds) Law into Action: Economic, Social and Cultural Rights in Aotearoa New Zealand (Thomson Reuters, Wellington, 2011) 112 at 119.
- Lawson v Housing New Zealand 2 NZLR 474.
- Lawson v Housing New Zealand 2 NZLR 474 at 498-499.
- Ministry of Business, Innovation and Employment 'Housing pressures in Christchurch: A Summary of the Evidence/2013', at 27.
- Human Rights Commission 'Monitoring Human Rights in the Canterbury Earthquake Recovery' 2013, at 59-61.
- Auckland Council 'Auckland Housing Accord'.
- Sylvia Bell "The Right to Health" in Margaret Bedggood and Kris Gledhill (eds) Law into Action: Economic, Social and Cultural Rights in Aotearoa New Zealand (Thomson Reuters, Wellington, 2011) 90 at 94.
- Sylvia Bell "The Right Health" in Margaret Bedggood and Kris Gledhill (eds) Law into Action: Economic, Social and Cultural Rights in Aotearoa New Zealand (Thomson Reuters, Wellington, 2011) 90 at 94.
- Sylvia Bell "The Right to Health" in Margaret Bedggood and Kris Gledhill (eds) Law into Action: Economic, Social and Cultural Rights in Aotearoa New Zealand (Thomson Reuters, Wellington, 2011) 90 at 95.
- New Zealand Bill of Rights Act 1990, sections 10, 11 and 19.
- Gareth Morgan, Geoff Simmons and John McCrystal Health Cheque: The Truth We Should All Know about New Zealand's Public Health System (Public Interest Publishing, Auckland, 2009) at 144.
- Shortland v Northland Health Ltd 1 NZLR 433.
- Sylvia Bell "The Right to Health" in Margaret Bedggood and Kris Gledhill (eds) Law into Action: Economic, Social and Cultural Rights in Aotearoa New Zealand (Thomson Reuters, Wellington, 2011) 90 at 96.
- Human Rights Commission 'Human Rights in New Zealand 2010', at 171-172.
- Education Act 1989, section 3.
- Education Act 1989, section 8.
- Education Act 1989, section 20(1).
- Statistics New Zealand 'Participation in early childhood education'.
- Statistics New Zealand '18-year-olds with higher qualifications'.
- Human Rights Commission 'Human Rights in New Zealand 2010', at 180.
- Ministry of Education 'State of Education in New Zealand 2008, at 2.
- International Labour Organization 'Ratifications for New Zealand'.
- Human Rights Act 1993, section 21.
- Ministry of Health v Atkinson 3 NZLR 456.
- Natalie Baird and Diana Pickard "Economic, social and cultural rights: a proposal for a constitutional peg in the ground" NZLJ 289 at 291
- Human Rights Commission 'New Zealand Consensus of Women's Participation 2012', at 2.
- Amanda Reilly "The Right to Work and Rights at Work" in Margaret Bedggood and Kris Gledhill (eds) Law into Action: Economic, Social and Cultural Rights in Aotearoa New Zealand (Thomson Reuters, Wellington, 2011) 71 at 83.
- Ministry for Women 'Gender pay gap'.
- Employment Relations Act 2000, section 69ZD.
- Māmari Stephens, "The Right to Social Security" in Margaret Bedggood and Kris Gledhill (eds) Law into Action: Economic, Social and Cultural Rights in Aotearoa New Zealand (Thomson Reuters, Wellington, 2011) 127 at 130-134.
- Māmari Stephens, "The Right to Social Security" in Margaret Bedggood and Kris Gledhill (eds) Law into Action: Economic, Social and Cultural Rights in Aotearoa New Zealand (Thomson Reuters, Wellington, 2011) 127 at 140.
- Ministry of Social Development.
- Māmari Stephens "The Right to Social Security" in Margaret Bedggood and Kris Gledhill (eds) Law into Action: Economic, Social and Cultural Rights in Aotearoa New Zealand (Thomson Reuters, Wellington, 2011) 127 at 134.
- Māmari Stephens "The Right to Social Security" in Margaret Bedggood and Kris Gledhill (eds) Law into Action: Economic, Social and Cultural Rights in Aotearoa New Zealand (Thomson Reuters, Wellington, 2011) 127 at 135.
- Child Poverty Action Group Incorporated v Attorney-General NZCA 402.
- Claudia Geiringer and Matthew Palmer, "Human Rights and Social Policy in New Zealand" (2007) 30 Soc Pol J of NZ.
- United Nations Human Rights 'Committee on Economic, Social and Cultural Rights'.
- Human Rights Commission 'New Zealand's International Obligations'.
- Concluding observations of the Committee on Economic, Social and Cultural Rights on the third periodic report of New Zealand 2012.
- Maori Health Web Page: Socioeconomic Determinants of Health - Deprivation. Retrieved 12 June 2007.
- "Over-representation of Maori in the criminal justice system" (PDF). Department of Corrections. September 2007. p. 4.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Department of Labour, NZ, Māori Labour Market Outlook
- Raeburn, J; Rootman I (1998). People-centred Health Promotion. John Wiley and Sons. pp. 106–109.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Diabetes in New Zealand - Models And Forecasts 1996 - 2011
- "Cultural linkage: treating Maori with alcohol and drug problems in dedicated Maori treatment programs". 32. Department of Psychological Medicine, Christchurch School of Medicine, New Zealand. Mar 1997: 415–24. PMID 9090803. Retrieved 29 September 2010.
|last1=in Authors list (help)<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- "Social Report: 2010". Ministry of Social Development. Retrieved 12 August 2011.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Skutnabb-Kangas, T, Phillipson, R & Rannut M (1995). Linguistic Human Rights: Overcoming Linguistic Discrimination. Walter de Gruyter. pp. 209–213. <templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- Tracy Watkins. "NZ does U-turn on rights charter". Stuff.co.nz. Retrieved 11 August 2011.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- "Human rights in New Zealand: Report 2009". Amnesty International. Retrieved 30 September 2010.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- "About the Human Rights Commission". Human Rights Commission. Retrieved 11 August 2011.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- "Human Rights in New Zealand 2010". Human Rights Commission. Retrieved 11 August 2011.<templatestyles src="Module:Citation/CS1/styles.css"></templatestyles>
- New Zealand Bill of Rights Act 1990, s4.
- New Zealand Bill of Rights Act 1990, s5.
- New Zealand Bill of Rights Act 1990, s6
- An example of the court using this section is R v Rangi 1 NZLR 385.
- New Zealand Bill of Rights Act 1990, s7
- PA Joseph Constitutional and Administrative Law in New Zealand (3rd ed, Brookers, Wellington, 2007), at 134.
- 2009 Universal Review Documentation
- See the original documents for further information
- Such as India, the Islamic Republic of Iran, Pakistan, Bangladesh and Turkey.
- The Constitutional Conversation
- Freedom in the World 2006 Report: New Zealand
- Amnesty International Report 2009: New Zealand
- Human Rights Commission report Human Rights in New Zealand 2010
- NZ Action Plan for Human Rights
- Office of the High Commissioner of Human Rights: New Zealand
- NZ Human Rights Commission: International Covenant on Civil and Political Rights
- Human Rights Commission: The New Zealand Bill of Rights
- New Zealand Bill of Rights Legislation
- Other Fair Trial Authority
- Concluding observations of the CESCR New Zealand 2012.
- Human Rights Commission 'New Zealand Consensus of Women's Participation 2012'.
- Human Rights Commission 'Human Rights and Water' 2012. |
Militarism was a definitive feature in the life of many states of the twentieth century that influenced the mentality and thinking of millions of people worldwide. It still has its legacy in the world today, affecting war-and-peace decisions made by politicians. Militarism is something that is very difficult to leave behind, since it contributes to the core beliefs of many leading countries. The term "militarism" can be defined as "the belief that a country should maintain a strong military capability and be prepared to use it aggressively to defend or promote national interests" (Definition of Militarism). That belief dominated people's minds and was the foundation of the ideology that reigned in many societies. Some countries were exceptionally effective in broadcasting that ideology and forming public opinion, which later influenced their whole populations and was reflected in every aspect of life.
Militarism in Germany and Japan manifested in very distinct forms and influenced gender mentality. While men in Prussia were regarded as the ones to defend the country, women were expected to take care of the home. Military training was designed for boys, and the ideology of militarism stressed patriarchal gender roles at home. Women were excluded from leadership roles; their task was to give birth to more children and raise them for the Motherland (Chickering, 11). At the same time, similar tendencies were observed in Japan. All military training was targeted at men, and women were the last to receive education in military values (Smethurst, 43). However, their participation in military affairs was broader than in Germany. Women were founders of various defense and patriotic associations; they were encouraged to join those organizations, and by doing so they fulfilled their patriotic obligations.
Thus, this paper will undertake to investigate how German and Japanese militarism affected both nations and influenced the role of women in both societies.
The German Empire was born during the Franco-Prussian War in 1870. The results of this war were the emergence of a powerful German nation-state, the rise of the Imperial German Army as the most powerful military in Europe, and a major shift in the balance of power on the European continent (Wikipedia). Imperial Germany brought many changes to the country.
The most important change that imperial Germany brought was rapid economic and industrial development. After the Franco-Prussian War, the large military reparations paid by France and the abundant iron ore in Alsace emboldened Germany to develop its industry rapidly. In The German Idea of Militarism, Stargardt argues that railway construction and large orders for military products by the bourgeoisie greatly drove the development of heavy industry; furthermore, owing to its late start in the industrial revolution, Germany was all the more willing to adopt advanced science and technology from other countries (Stargardt, 26). The decades after 1870 witnessed a leapfrogging development of German capitalist industry, whose growth rate was third only to those of the USA and Japan (Stargardt, 137).
Since Germany's limited domestic resources and small home market struggled to meet the demand for further development of the capitalist economy, the bourgeoisie desperately needed to explore new markets and find new sources of raw materials and places to invest, which drove the German bourgeoisie to turn its eyes overseas and actively expand its colonies abroad to meet the development needs of its capitalism. As a late-starting capitalist country, Germany began its colonial expansion in the 1880s. Nearly all seats had been taken by the time it was ready to take part in the carving up of colonies as one of the imperialist states. At that time, Germany owned colonies of only one million square miles, one ninth of England's and a third of France's.
The bourgeoisie's desire for expansion was fueled by the improvement in national strength, which also provided a strong material basis for achieving this ambition. Economic development became the fundamental motivation of foreign policy, impelling the German bourgeoisie to seek the status of a world power and to carve up the world market. Therefore, the rapid development of the economy not only boosted the formation of German militarism, but also provided a good material basis for its development.
As the Germany’s national strength improved in the nineteenth century, many people considered that Germany was getting increasingly stronger because of the implementation of militarism. Therefore, German social and economic growth led to redefining the nation’s attitude towards the army; army became the symbol of the German pride and intense nationalism. The soldiers were the professional custodians of national security and technicians of violence, as well as symbols of power and high social status. According to an article “Minerva Quarterly Report on Women and the Military” saying that, military at that time, was considered as “the guardian of national identity and state sovereignty condensed”. Additionally, it played an important role in “defining national interests and behaved as the only institution capable of protecting the society” (Minerava, 6)
The formation of German militarism and the expansionist desire of the bourgeoisie later led to the First World War, a disaster that deeply affected not only Germany's economy but also the role of women. German nationalism and militarism exploited a gender ideology in which "men are portrayed as warrior, chauvinistic, striving for power, driven by bravery, domination, competition, aggression and honor", while "women are portrayed as emotional, domestic, committed, supportive and passive." Militarist ideology rests on the gendered propaganda that men's duty is to protect women and women need to be protected by men (Minerva, 7).
According to an article “Germany-- women status” by Jone Johnson Lewis, he mentioned that before the World War II, Germany had been encouraging women to stick to their traditional role in families; the law of Germany prescribed that women shall be fully responsible for all housework, and cannot work unless there is no conflicts between work and marriage and family obligations, which clearly showed that the government discouraged women participating in work. However, the war made many German men in working ages get killed or become prisoners, thus Germany was in shortage of male labors and a lot of constructions were needed. In order to fast recover its national economy, the government encouraged women to have half-day social work. More and more German women threw over the traditional role of assisting their husband and bringing up children, and found themselves a full-time job as men did. As a result, German women’s traditional role was overthrown, and their social status was highly improved after the Second World War.
Militarism and fascism can often be found in the same country, and Japanese militarism was a mixture of the two. Japan's deep tradition of feudalism and autocracy enabled it to instill the idea of invasion into its people's minds and to carry out a general policy of putting military affairs first in industry, the army and foreign relations, thus building its state system of militarism.
Militarism had a deep influence on the establishment of Japan's modern industry and on the identity of the Japanese people. In order to establish the state system, the Japanese government gave priority to the development of armories, shipyards and railway construction in 1870. The government also amended the law on population registration to implement the newly released conscription order. During the period when the government was controlled by the shoguns, the generals of the army, Japanese peasants and commoners had no family names, and the same name might be used by many people in the same village, which made conscription difficult. The new law on population registration required people to have surnames, which facilitated conscription and the payment of taxes in commodities (Bowen). We can therefore see that the establishment of Japan's modern industry, and even the identity of its people, was driven by militarism.
When Japan’s military force was unprecedentedly powerful during the Meiji era from September 1868 to July 1912, the Meiji government started to vigorously agitate wars and continued to improve its military force with the ill gains depredated from wars. For example, the Japanese government used 62.8% war indemnity from China in arms expansion, making it the fourth great power in naval force following UK, France and Russia.( Zhuo,2005) In addition, masculine identity made the military’s task easier, but it was also another factor pulling in the direction of potentially greater violence later. The late nineteenth-century identification of males with strength, violence and decisiveness gave individual soldiers a powerful, personal incentive to conform to military ideals and do their duty to the point of massacre and gratuitous destruction. (Isabel, 101) The unsound development of military has pushed Japan’s militarism on the track of aggression and expansion.
Some 300 years before the Sino-Japanese War of 1894-1895, the Japanese ruler Toyotomi Hideyoshi planned to conquer China using Korea as a base. On July 25, 1927, Prime Minister Tanaka Giichi clearly proposed five steps to conquer the whole world in his secret memorial to the throne: first, seize Taiwan; second, capture Korea; third, hold Manchuria as well as Inner and Outer Mongolia; fourth, take China; and lastly, conquer the entire world (Zhuo, 2005). The main purpose of this plan was to move the whole country from its small islands onto the continent, which later became known as the "continental" strategy, the unswerving aim of Japan's rulers through the ages. The plan also put forward fourteen concrete implementation measures for seizing China's resources.
In the 1920s, military extremists started to take control of Japan's foreign policy. Since China was torn by revolutions, the Japanese viewed the country, and especially its resource-rich region of Manchuria in northeast Asia, as a target for expansion. The military extremists became distrustful of the civilian government and began to oust civilians from all offices. The Kwantung Army, stationed on the Kwantung Peninsula (Southern Manchuria), was run by extremist officers who had plans to seize the whole territory of the region. Finally, they organized an incident in order to justify an invasion. A bomb was exploded on the track of the Japanese-owned South Manchuria Railway. Although it caused little damage and no casualties, it was used as a reason to bring in troops to "protect" the railroad. This aggression was followed by the annexation of other Chinese provinces and the occupation of French, English and Southeast Asian colonies. Later developments led to the signing of a pact with Germany and Japan's entry into World War II (Causton, 436).
Militarism’s Violent Legacy in Today’s World
After the Second World War, all combatants had problems left over to a greater or lesser degree, such as territorial issues, the investigation of war criminals, war reparations between governments, and damage indemnities owed to private associations and individuals. However, the problems of "comfort women" and rape left over from the aggressive war launched by Japan against China remain an impediment to Sino-Japanese relations that cannot easily be overcome, since there are many disputes between the two countries.
One of the remaining problems stems from the massacre in which Chinese women were sexually abused by Japanese soldiers. According to the anthropology professor Ishida Yuji, some of them were raped by one or many Japanese soldiers in turn; others were forced to become comfort women, sexual slaves of the Japanese soldiers. In Nanjing city alone, 80,000 women were sexually assaulted during the months of the Nanjing Massacre, according to the judgment of the International Military Tribunal for the Far East and the investigation by the Nanjing Investigation Committee of Enemy's Crimes. Some of them, including 8-year-old girls and women over 70, were not only raped but also cut on the breasts and abdomens and finally killed (Wikipedia). The second problem is the comfort women. The Japanese government created the system of comfort women because it was concerned that the army might lose battle effectiveness through the wide spread of venereal diseases resulting from rape, and that large-scale rape would damage military discipline and social order, which would be unfavorable for the long-term rule of China.
The damage from biological weapons is another important remaining problem. Hundreds of victims of the biological warfare waged in Zhejiang province of China submitted a joint complaint to the Japanese embassy in October 1922, demanding that Japan pay compensation; at the same time they sent representatives to bring suit in the Tokyo district court, demanding that the Japanese government pay compensation, and also submitted a petition to Hashimoto, Prime Minister of Japan, requesting that the government make the data on the biological warfare public. During the war, Japan violated the treaty forbidding biological weapons by establishing biological-warfare units, production facilities, human experimentation laboratories and testing grounds in the zones it occupied, and by secretly waging biological warfare. Japan's Kwantung Army conducted research on biological warfare in northeast China after the September 18th Incident and, after its defeat, left behind a large amount of biological agents as well as virus-carrying fleas and mice, as a result of which some places in China still saw infections years later.
Militarism played an important role in both Germany and Japan. It manifested very distinctly, affecting all social classes. The most rapid change that occurred in the German Empire was its economic growth, which was accompanied by industrial development. Germany had limited domestic resources and a small home market and strove to expand its influence; it began its colonial expansion in the 1880s. Becoming stronger, Germany sought to assert itself as a world power. The rapid development of capitalism contributed to that, boosting the formation of the nation's militarism. As militaristic ideas took root in society, the army became the symbol of German pride and nationalism and began to play an important role in defining national interests. Nationalism and militarism used a gender ideology in which men were portrayed as a brave, chauvinistic class striving for power, while women were shown as domestic, passive and supportive. Before World War II women were encouraged to take care of their homes and raise children, but after the beginning of the war, when men were enlisted in the army or killed, women became more and more involved in work and their traditional role changed.
Japan’s militarism was deeply rooted in people’s minds. The country changed its laws in order to give priority to the army. The government used 62% of military reparations paid by China to develop its armed forces and weapons. Unhealthy expansion of the Japanese militarism pushed the country to the track of aggression, which was fully supported by the population. Japan invaded China and Korea, annexed parts of their territories and began to exploit China’s natural resources. Besides the acts of military aggression, Japanese soldiers committed many war crimes against civilians that included rape, murders and physical abuse. They also used biological weapons against the Chinese people.
As seen from the examples of those countries, militarism permeated both cultures and deeply affected the two nations. Their rapidly developing economies required new markets and resources, which caused them to pursue their national interests outside their borders. The populations of both Germany and Japan acted in support of their armies, thus entrusting them with their future. The spirit of militarism influenced both men and women, who had their clearly defined obligations under the existing circumstances.
A sample-return mission is a spacecraft mission with the goal of collecting and returning samples from an extraterrestrial location to Earth for analysis. Sample-return missions may bring back merely atoms and molecules or a deposit of complex compounds such as loose material ("soil") and rocks. These samples may be obtained in a number of ways, such as soil and rock excavation or a collector array used for capturing particles of solar wind or cometary debris.
To date, samples of Moon rock from Earth's Moon have been collected by robotic and crewed missions, the comet Wild 2 and the asteroid 25143 Itokawa have been visited by a robotic spacecraft which returned samples to Earth, and samples of the solar wind have been returned by the robotic Genesis mission.
In addition to sample-return missions, samples from three identified non-terrestrial bodies have been collected by means other than sample-return missions: samples from the Moon in the form of Lunar meteorites, samples from Mars in the form of Martian meteorites, and samples from Vesta in the form of HED meteorites.
Samples available on Earth can be analyzed in laboratories, so we can further our understanding and knowledge as part of the discovery and exploration of the Solar System. Until now many important scientific discoveries about the Solar System were made remotely with telescopes, and some Solar System bodies were visited by orbiting or even landing spacecraft with instruments capable of remote sensing or sample analysis. While such an investigation of the Solar System is technically easier than a sample-return mission, the scientific tools available on Earth to study such samples are far more advanced and diverse than those that can go on spacecraft. Analysis of samples on Earth allows researchers to follow up any findings with different tools, including tools that have yet to be developed; in contrast, a spacecraft can carry only a limited set of analytic tools, and these have to be chosen and built long before launch.
Samples analyzed on Earth can be matched against findings of remote sensing, for more insight into the processes that formed the Solar System. This was done, for example, with findings by the Dawn spacecraft, which visited the asteroid Vesta from 2011 to 2012 for imaging, and samples from HED meteorites (collected on Earth until then), which were compared to data gathered by Dawn. These meteorites could then be identified as material ejected from the large impact crater Rheasilvia on Vesta. This allowed deducing the composition of crust, mantle and core of Vesta. Similarly some differences in composition of asteroids (and, to a lesser extent, different compositions of comets) can be discerned by imaging alone. However, for a more precise inventory of the material on these different bodies, more samples will be collected and returned in the future, to match their compositions with the data gathered through telescopes and astronomical spectroscopy.
One further focus of such investigation—besides the basic composition and geologic history of the various Solar System bodies—is the presence of the building blocks of life on comets, asteroids, Mars or the moons of the gas giants. Several sample-return missions to asteroids and comets are currently in the works. More samples from asteroids and comets will help determine whether life formed in space and was carried to Earth by meteorites. Another question under investigation is whether extraterrestrial life formed on other Solar System bodies like Mars or on the moons of the gas giants, and whether life might even exist there. The result of NASA's last "Decadal Survey" was to prioritize a Mars sample-return mission, as Mars has a special importance: it is comparatively "nearby", might have harbored life in the past, and might even continue to sustain life. Jupiter's moon Europa is another important focus in the search for life in the Solar System. However, due to the distance and other constraints, Europa might not be the target of a sample-return mission in the foreseeable future.
Planetary protection aims to prevent biological contamination of both the target celestial body and, in the case of sample-return missions, the Earth. No sample has yet been returned with alien life in it. A sample return from Mars or another location with the potential to host life is a category V mission under COSPAR, which directs that any unsterilized sample returned to Earth be contained, because it is unknown what effects such hypothetical life would have on humans or on the biosphere of Earth. For this reason, Carl Sagan and Joshua Lederberg argued in the 1970s that sample-return missions classified as category V should be conducted with extreme caution, and later studies by the NRC and ESF agreed.
The Apollo program returned over 382 kg (842 lb) of lunar rocks and regolith (including lunar 'soil') to the Lunar Receiving Laboratory in Houston. Today, 75% of the samples are stored at the Lunar Sample Laboratory Facility built in 1979. In July 1969, Apollo 11 achieved the first successful sample return from another Solar System body. It returned approximately 22 kilograms (49 lb) of Lunar surface material. This was followed by 34 kilograms (75 lb) of material from Apollo 12, 42.8 kilograms (94 lb) of material from Apollo 14, 76.7 kilograms (169 lb) of material from Apollo 15, 94.3 kilograms (208 lb) of material from Apollo 16, and 110.4 kilograms (243 lb) of material from Apollo 17.
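As a quick arithmetic check on the figures just quoted, the per-mission masses can be summed and compared with the roughly 382 kg program total. The minimal Python sketch below simply uses the rounded per-mission values from this article, so a small shortfall against 382 kg is expected:

```python
# Approximate lunar sample masses (kg) quoted above for each Apollo mission.
apollo_samples_kg = {
    "Apollo 11": 22.0,
    "Apollo 12": 34.0,
    "Apollo 14": 42.8,
    "Apollo 15": 76.7,
    "Apollo 16": 94.3,
    "Apollo 17": 110.4,
}

total = sum(apollo_samples_kg.values())
print(f"Sum of per-mission figures: {total:.1f} kg")  # ~380.2 kg
# The quoted program total is just over 382 kg; the small gap reflects
# rounding in the per-mission figures.
```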
One of the most significant advances in sample-return missions occurred in 1970 when the robotic Soviet mission known as Luna 16 successfully returned 101 grams (3.6 oz) of lunar soil. Likewise, Luna 20 returned 55 grams (1.9 oz) in 1972, and Luna 24 returned 170 grams (6.0 oz) in 1976. Although they recovered far less than the Apollo missions, they did so fully automatically. Apart from these three successes, other attempts under the Luna programme failed. The first two missions were intended to outstrip Apollo 11 and were undertaken shortly before it, in June and July 1969: Luna E-8-5 No. 402 failed at launch, and Luna 15 crashed on the Moon. Later sample-return attempts also failed: Kosmos 300 and Kosmos 305 in 1969, Luna E-8-5 No. 405 in 1970 and Luna E-8-5M No. 412 in 1975 were lost in unsuccessful launches, while Luna 18 in 1971 and Luna 23 in 1974 failed during landing on the Moon.
In 1970, the Soviet Union planned for a 1975 first Martian sample-return mission in the Mars 5NM project. This mission was planned to use an N1 rocket, but as this rocket never flew successfully, the mission evolved into the Mars 5M project, which would use a double launch with the smaller Proton rocket and an assembly at a Salyut space station. This Mars 5M mission was planned for 1979, but got canceled in 1977 due to technical problems and complexity; all hardware was ordered destroyed.
New missions after a 20-year hiatus
The Earth-Orbital Debris Collection (ODC) experiment was deployed on the Mir space station for 18 months during 1996–97 and used aerogel to capture particles from low Earth orbit, consisting of interplanetary dust and man-made particles. Far from being "the last sample-return mission... in... twenty years", ODC was a portable version of an LDEF collector, decreasing collection time significantly, and effective area by orders of magnitude.
The next mission to return extraterrestrial samples was the Genesis mission, which returned solar wind samples to Earth from beyond Earth orbit in 2004. Unfortunately, the Genesis capsule failed to open its parachute while re-entering the Earth's atmosphere and crash-landed in the Utah desert. There were fears of severe contamination or even total mission loss, but scientists managed to save many of the samples. They were the first to be collected from beyond lunar orbit. Genesis used a collector array made of wafers of ultra-pure silicon, gold, sapphire, and diamond. Each different wafer was used to collect a different part of the solar wind.
Genesis was followed by NASA's Stardust spacecraft, which returned comet samples to Earth on January 15, 2006. It safely passed by Comet Wild 2 and collected dust samples from the comet's coma while imaging the comet's nucleus. Stardust used a collector array made of low-density aerogel (99% of which is empty space), which has about 1/1000 of the density of glass. This enables the collection of cometary particles without damaging them due to high impact velocities. Particle collisions with even slightly porous solid collectors would result in destruction of those particles and damage to the collection apparatus. During cruise, the second side of the array collected at least seven interstellar dust particles.
In June 2010 the Japan Aerospace Exploration Agency (JAXA) Hayabusa probe returned asteroid samples to Earth after a rendezvous with (and a landing on) S-type asteroid 25143 Itokawa. In November 2010, scientists at the agency confirmed that, despite failure of the sampling device, the probe retrieved micrograms of dust from the asteroid, the first brought back to Earth in pristine condition.
The Russian Fobos-Grunt was a failed sample-return mission designed to return samples from Phobos, one of the moons of Mars. It was launched on November 8, 2011, but failed to leave Earth orbit and crashed after several weeks into the southern Pacific Ocean.
The Japan Aerospace Exploration Agency (JAXA) launched the improved Hayabusa2 space probe on December 3, 2014 and plans to return asteroid samples by 2020. Hayabusa2 arrived at its target, the C-type near-Earth asteroid 162173 Ryugu (formerly designated 1999 JU3), on 27 June 2018. It is expected to survey the asteroid for a year and a half, during which time it will collect samples multiple times, before departing in December 2019 and returning the samples to Earth in December 2020.
The OSIRIS-REx mission was launched in September 2016 on a mission to return samples from the asteroid 101955 Bennu. The samples are expected to enable scientists to learn more about the time before the birth of the Solar System, initial stages of planet formation, and the source of organic compounds that led to the formation of life. The sample will be collected with the TAGSAM, a robotic arm with a specialized collector head that will deposit the sample into an Earth return capsule.
JAXA is developing the MMX mission, a sample-return mission to Phobos that will be launched in 2024. Of the two moons, Phobos's orbit is closer to Mars and its surface may have adhered particles blasted from the red planet; thus the Phobos samples collected by MMX may contain material originating from Mars itself.
NASA has long planned a Martian sample-return mission, but has yet to secure the budget to successfully design, build, launch, and land such a probe. The mission remained on NASA's roadmap for planetary science as of the 2013 Planetary Science Decadal Survey.
China is planning to conduct a Chang'e 5 lunar sample return around 2019. If successful, it would mark the first lunar sample return in over 40 years. Russia has plans for the Luna-Grunt mission to return samples from the Moon by 2021 and for Mars-Grunt to return samples from Mars 5–10 years later. Russia also plans to repeat the Fobos-Grunt mission around 2024.
Methods of sample return
Sample-return methods include, but are not restricted to, the following:
A collector array may be used to collect millions or billions of atoms, molecules, and fine particulates by using a number of wafers made of different elements. The molecular structure of these wafers allows the collection of various sizes of particles. Collector arrays, such as those flown on Genesis, are ultra-pure in order to ensure maximal collection efficiency, durability, and analytical distinguishability.
Collector arrays are useful for collecting tiny, fast-moving atoms such as those expelled by the Sun through the solar wind, but they can also be used to collect larger particles such as those found in the coma of a comet. The NASA spacecraft known as Stardust implemented this technique. However, due to the high speeds and sizes of the particles that make up the coma and the surrounding area, a dense solid-state collector array was not viable. As a result, another means of collecting samples had to be designed so as to preserve the safety of the spacecraft and the samples themselves.
Aerogel is a silica-based porous solid with a sponge-like structure, 99.8% of whose volume is empty space. Aerogel has about 1/1000 of the density of glass. An aerogel was used in the Stardust spacecraft because the dust particles the spacecraft was to collect would have an impact speed of about 6 km/s. A collision with a dense solid at that speed could alter their chemical composition or perhaps vaporize them completely.
Since the aerogel is mostly transparent, and the particles leave a carrot-shaped path once they penetrate the surface, scientists can easily find and retrieve them. Since its pores are on the nanometer scale, particles, even ones smaller than a grain of sand, do not merely pass through the aerogel completely. Instead, they slow to a stop and then are embedded within it.
The Stardust spacecraft has a tennis-racket-shaped collector with aerogel fitted to it. The collector is retracted into its capsule for safe storage and delivery back to Earth. Aerogel is quite strong and easily survives both launching and outer-space environments.
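To see why a hard collector would be destructive at the roughly 6 km/s encounter speed mentioned above, it helps to estimate the kinetic energy a captured grain carries. The sketch below is purely illustrative: the 1-microgram grain mass is an assumption for the sake of the estimate, not a mission figure.

```python
# Rough kinetic-energy estimate for a cometary dust grain hitting a collector.
# Assumed values: a 1-microgram grain (illustrative only) and the ~6 km/s
# encounter speed quoted for Stardust's pass through the coma.
grain_mass_kg = 1e-9          # 1 microgram, assumed for illustration
impact_speed_m_s = 6_000.0    # ~6 km/s relative speed at the comet

kinetic_energy_j = 0.5 * grain_mass_kg * impact_speed_m_s**2
specific_energy_j_per_kg = kinetic_energy_j / grain_mass_kg

print(f"Kinetic energy:  {kinetic_energy_j:.3e} J")            # ~1.8e-2 J
print(f"Specific energy: {specific_energy_j_per_kg:.2e} J/kg")  # ~1.8e7 J/kg
# Roughly 18 MJ/kg is comparable to the energy needed to melt and vaporize
# silicate material, which is why an abrupt stop against a dense solid
# would destroy the grain, while gradual deceleration over millimetres of
# aerogel preserves it.
```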
Robotic excavation and return
Some of the most risky and difficult types of sample-return missions are those that require landing on an extraterrestrial body such as an asteroid, moon, or planet. It takes a great deal of time, money, and technical ability in order to even initiate such plans. It is a difficult feat that requires that everything from launch to landing to retrieval and launch back to Earth is planned out with high precision and accuracy.
This type of sample return, although having the most risks, is the most rewarding for planetary science. Furthermore, such missions carry a great deal of public outreach potential, which is an important attribute for space exploration when it comes to public support. The only successful robotic sample-return missions of this type have been the Soviet Luna landers.
List of missions
| Launch date | Operator | Name | Sample origin | Samples returned | Recovery date | Mission result |
|---|---|---|---|---|---|---|
| 16 July 1969 | United States | Apollo 11 | Moon | 22 kilograms (49 lb) | 24 July 1969 | Successful |
| 14 November 1969 | United States | Apollo 12 | Moon | 34 kilograms (75 lb) | 24 November 1969 | Successful |
| 11 April 1970 | United States | Apollo 13 | Moon | — | 17 April 1970 | Failed |
| 31 January 1971 | United States | Apollo 14 | Moon | 43 kilograms (95 lb) | 9 February 1971 | Successful |
| 26 July 1971 | United States | Apollo 15 | Moon | 77 kilograms (170 lb) | 7 August 1971 | Successful |
| 16 April 1972 | United States | Apollo 16 | Moon | 95 kilograms (209 lb) | 27 April 1972 | Successful |
| 7 December 1972 | United States | Apollo 17 | Moon | 111 kilograms (245 lb) | 19 December 1972 | Successful |
| 22 March 1996 | United States / | Earth-Orbital Debris Collection | Low Earth orbit | Particles | 6 October 1997 | Successful |
| 14 April 2015 | Japan / | Tanpopo mission | Low Earth orbit | Particles | February 2018 | Successful |
| Launch date | Operator | Name | Sample origin | Samples returned | Recovery date | Mission result |
|---|---|---|---|---|---|---|
| 14 June 1969 | Soviet Union | Luna E-8-5 No. 402 | Moon | | | Failure |
| 13 July 1969 | Soviet Union | Luna 15 | Moon | | | Failure |
| 23 September 1969 | Soviet Union | Kosmos 300 | Moon | | | Failure |
| 22 October 1969 | Soviet Union | Kosmos 305 | Moon | | | Failure |
| 6 February 1970 | Soviet Union | Luna E-8-5 No. 405 | Moon | | | Failure |
| 12 September 1970 | Soviet Union | Luna 16 | Moon | 101 grams (3.6 oz) | 24 September 1970 | Success |
| 2 September 1971 | Soviet Union | Luna 18 | Moon | | | Failure |
| 14 February 1972 | Soviet Union | Luna 20 | Moon | 55 grams (1.9 oz) | 25 February 1972 | Success |
| 2 November 1974 | Soviet Union | Luna 23 | Moon | | | Failure |
| 16 October 1975 | Soviet Union | Luna E-8-5M No. 412 | Moon | | | Failure |
| 9 August 1976 | Soviet Union | Luna 24 | Moon | 170 grams (6.0 oz) | 22 August 1976 | Success |
| 7 February 1999 | United States | Stardust | 81P/Wild | Particles | 15 January 2006 | Success |
| 8 August 2001 | United States | Genesis | Solar wind | Particles | 9 September 2004 | Success (partial) |
| 9 May 2003 | Japan | Hayabusa | 25143 Itokawa | Particles | 13 June 2010 | Success (partial) |
| 8 November 2011 | Russia | Fobos-Grunt | Phobos | | | Failure |
| 3 December 2014 | Japan | Hayabusa 2 | 162173 Ryugu | | December 2020 | Ongoing |
| 8 September 2016 | United States | OSIRIS-REx | 101955 Bennu | | 24 September 2023 | Ongoing |
| December 2019 | China | Chang'e 5 | Moon | | 2020 | Planned |
- What did Dawn learn at Vesta? The Planetary Society.
- Joshua Lederberg Parasites Face a Perpetual Dilemma (PDF). Volume 65, Number 2, 1999 / American Society for Microbiology News 77.
- Assessment of Planetary Protection Requirements for Mars Sample Return Missions (Report). National Research Council. 2009.
- Preliminary Planning for an International Mars Sample Return Mission Report of the International Mars Architecture for the Return of Samples (iMARS) Working Group June 1, 2008.
- European Science Foundation – Mars Sample Return backward contamination – Strategic advice and requirements Archived 2016-06-02 at the Wayback Machine July, 2012, ISBN 978-2-918428-67-1 – see Back Planetary Protection section. (for more details of the document see abstract).
- Mars Sample Return: Issues and Recommendations. Task Group on Issues in Sample Return. National Academies Press, Washington, DC (1997).
- "NASA Lunar Sample Laboatory Facility". NASA Curation Lunar. NASA. September 1, 2016. Retrieved February 15, 2017.
A total of 382 kilograms of lunar material, comprising 2200 individual specimens returned from the Moon...
- Orloff 2004, "Extravehicular Activity"
- Chaikin, Andrew (2007). A Man On the Moon: The Voyages of the Apollo Astronauts (Third ed.). New York: Penguin Books. pp. 611–613.
- Kristen Erickson (July 16, 2009). Amiko Kauderer (ed.). "Rock Solid: JSC's Lunar Sample Lab Turns 30". 40th Anniversary of Apollo Program. NASA. Retrieved June 29, 2012.
- Wade, Mark. "Luna Ye-8-5". Encyclopedia Astronautica. Retrieved 27 July 2010.
- Советский грунт с Марса (in Russian) Archived April 8, 2010, at the Wayback Machine
- Westphal, A.; Stroud, R.; et al. (15 Aug 2014). "Evidence for interstellar origin of seven dust particles collected by the Stardust spacecraft". Science. 345 (6198): 786–91. Bibcode:2014Sci...345..786W. doi:10.1126/science.1252496. hdl:2381/32470. PMID 25124433.
- Amos, Jonathan (November 16, 2010). "Japan probe collected particles from Itokawa asteroid". BBC News. Retrieved November 16, 2010.
- Emily Lakdawalla (January 13, 2012). "Bruce Betts: Reflections on Phobos LIFE". The Planetary Society Blog. Retrieved March 17, 2012.
- Kramer, Andrew (January 15, 2012). "Russia's Failed Mars Probe Crashes Into Pacific". Retrieved January 16, 2012.
- "Japanese spacecraft reaches asteroid after three-and-a-half-year journey – Spaceflight Now". spaceflightnow.com. Retrieved 2018-09-23.
- "Operation Status for the Asteroid Explorer Hayabusa2, in the vicinity of Ryugu" (PDF). global.jaxa.jp. 19 July 2018. Retrieved 22 September 2018.
- "NASA's OSIRIS-REx Speeds Toward Asteroid Rendezvous". NASA. 9 September 2016. Retrieved 9 September 2016.
- "Asteroid probe begins seven-year quest". BBC News. 9 September 2016. Retrieved 9 September 2016.
- "NASA To Launch New Science Mission To Asteroid In 2016". NASA.
- Hille, Karl (2018-11-16). "OSIRIS-REx is Prepared to TAG an Asteroid". NASA. Retrieved 2018-12-15.
- "Archived copy" (PDF). Archived from the original (PDF) on 2016-12-22. Retrieved 2017-12-29.CS1 maint: Archived copy as title (link)
- "Martian Moons eXploration (MMX) Mission Overview" (PDF). JAXA Tokyo Office: JAXA. 10 April 2017. Retrieved 2018-07-20.
- 火星衛星の砂回収へ JAXA「フォボス」に探査機. Nikkei (in Japanese). September 22, 2017. Retrieved 2018-07-20.
- Visions and Voyages for Planetary Science in the Decade 2013–2022, National Academies Press.
- English.news.cn (2012-10-10). "China considers more Mars probes before 2030". news.xinhuanet.com. Retrieved 2012-10-14.
- Staff Writers Beijing (AFP) (2012-10-10). "China to collect samples from Mars by 2030: Xinhua". marsdaily.com. Retrieved 2012-10-14.
- China's Deep-space Exploration to 2030 by Zou Yongliao Li Wei Ouyang Ziyuan Key Laboratory of Lunar and Deep Space Exploration, National Astronomical Observatories, Chinese Academy of Sciences, Beijing.
- "Comet Surface Sample Return" (PDF). Lunar and Planetary Institute. Retrieved 8 January 2019.
- Finalists in NASA's Spacecraft Sweepstakes: A Drone on Titan, and a Comet-Chaser. Kenneth Chang, The New York Times. 20 November 2017.
- "Stardust, NASA's Comet Sample Return Mission". NASA. Retrieved 11 December 2015.
- "Mir Orbital Debris Collector Data Analyzed". Spacedaily.com. Retrieved 8 July 2018.
- "NASA - Astrobiology Exposure and Micrometeoroid Capture Experiments". www.nasa.gov.
- Mars Exploration: Sample Returns Jet Propulsion Laboratory Mars Exploration Program on sample return missions.
- Stardust Homepage Jet Propulsion Laboratory Stardust mission website.
- Genesis Mission Homepage Jet Propulsion Laboratory Genesis mission website.
- Stardust: Aerogel Stardust website on aerogel technology.
- JAXA Hayabusa JAXA Hayabusa project update.
- MarsNews.com: Mars Sample Return MarsNews.com on Mars Sample Return missions.
- Texas Space Grant Consortium: Missions to the Moon A list of missions to the Moon from 1958 to 1998.
- Evaluating the Biological Potential in Samples Returned from Planetary Satellites and Small Solar System Bodies The National Academies, Space Science Board 1998 |
For winter solstice, crowds usually gather at Stonehenge to watch the Sun set between the uprights of the tallest trilithon. That practice has been taking place since our ancient ancestors erected the sarsen stones about 2500 BC. But there is more to Stonehenge than observing its alignments to the sunrise and sunset at solstice. When people gather for rituals, they speak and make music—sounds that are amplified and altered by reflections from the stones. To fully understand Stonehenge, visitors need to look beyond its appearance, including the archaeological artifacts dug up at the site, to quantify how the monument’s acoustics altered its sounds and how the stones’ prehistoric geometry might have influenced what went on there.
Sunrise and sunset at solstice can still be experienced at the site. Although it is possible to get a sense of scale and be awed by the staggering feat of construction, listening to the current structure gives a misleading impression of what our ancestors heard in the late Neolithic period and early Bronze Age. The current thinking is that around 2200 BC the monument had 157 stones. That’s roughly double the number of stones and fragments that are left at the modern ruin, and many of those are now displaced or fallen over.
I got interested in ancient sites such as Stonehenge when I wrote about sounds of the past for my 2014 book Sonic Wonderland. While researching the topic, I realized that no one had investigated prehistoric stone circles by using acoustic scale models. That awareness prompted me to construct such a model on a 1:12 scale, as seen in the photo. Two research questions I and my collaborators—acoustician Bruno Fazenda (University of Salford) and archaeologist Susan Greaney (the nonprofit English Heritage)—wanted to address were, How is sound altered by the stones? and What does that reveal about where rituals might have taken place in the structure?
Making the model
Constructing a scale model is a major challenge, but the method provides a more accurate simulation of diffraction than can be achieved with current computer models. For large spaces, computer-modeling techniques are commonly based on ray tracing. And they are physically accurate only for high frequencies, at which the wavelength is smaller than the dimensions of the reflecting surfaces. The frequency range relevant to speech and music spans 100 Hz (3.4 m wavelength) to 5000 Hz (7 cm wavelength). With the narrowest stone 40 cm wide and the tallest 6.3 m high, geometric computer models are problematic for much of that bandwidth. It is possible to solve the wave equation to model diffraction and get more accurate results than ray-tracing methods, but the calculations would require too much time.
Acoustic scale modeling has been used in architectural acoustics since the 1930s. And even today, acoustic consultants make physical models when they are designing the most prestigious auditoriums. The technique is appealing because it can capture wave effects, such as interference and complex reflections from the stones. But for the approach to work, it is necessary to use a smaller wavelength. In our 1:12 scale model of Stonehenge, we used sound waves at 12 times their normal frequency because that preserves the relative size of the sound wavelength and stone dimensions.
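The frequency scaling described above can be checked with a short calculation. The following is a minimal sketch, assuming a nominal speed of sound of about 343 m/s; the numbers are illustrative rather than taken from the study.

SPEED_OF_SOUND = 343.0  # m/s, assumed nominal value
SCALE = 12              # a 1:12 model, so test frequencies are 12 times higher

def model_frequency(full_scale_hz):
    # A full-scale frequency maps to a model frequency 12 times higher,
    # which keeps the ratio of wavelength to stone size unchanged.
    return full_scale_hz * SCALE

def wavelength(freq_hz):
    return SPEED_OF_SOUND / freq_hz

for f in (100, 1000, 5000):  # full-scale band relevant to speech and music
    fm = model_frequency(f)
    print(f"{f} Hz (wavelength {wavelength(f):.2f} m) -> "
          f"{fm} Hz in the model (wavelength {100 * wavelength(fm):.1f} cm)")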
People often ask about the materials in our model. Why aren’t the stones on grass, for example? We needed to match the materials’ reflection properties and take into account that measurements take place at ultrasonic frequencies. Were the model on grass, ground absorption would have been far too high. (The absorption coefficient of ground at 12 000 Hz in the model must match that of the real site at 1000 Hz.) We found that medium-density fiberboard provides a close proxy at 12 000 Hz.
The same reasoning explains why the stones need not be made of rock. Some of the model stones were three-dimensional printed plastic hollows, backfilled with concrete to make them heavy enough to reflect sound efficiently. Others were molded using a plaster–polymer mix. All were sealed with an automotive, cellulose spray paint to prevent sounds from penetrating surface pores. The approach was more than mere convenience. The time required to 3D print all 157 stones was estimated to take nine months.
We had to accurately create features of the model—the size, shape, and location of the stones—because sound from the henge primarily escapes between the outer stones and into the sky. We drew on the latest archaeological evidence for the stone arrangements. Historic England, a public organization that helps protect the country's historic environment, provided a computer model of the reconstructed geometry of Stonehenge as it appeared in 2200 BC, a time when its usage likely peaked. Those were the starting points for our physical model.
Flutes, horns, and drums
Getting recording equipment to work at broadband frequencies in the ultrasonic region is no easy task. In the absence of a compact omnidirectional source, we arranged four tweeters—each pointed outward on a square—inside the model. The speakers emitted frequencies up to 70 000 Hz that we could record. To characterize the space, we used a single microphone and incrementally moved it to 24 positions inside the henge and just outside its border. At each position we recorded the short, sharp impulses produced by the speakers placed elsewhere in the model.
Those recordings captured the sound directly from source to microphone, followed by the thousands of reflections that came from the stones. From the impulse responses, we calculated a series of parameters that relate to human perception. The first was reverberation time: how long it takes the sound to decay by 60 dB after the source is switched off. In our scale model of Stonehenge, the average midfrequency reverberation time was 0.64 ± 0.03 seconds. A large movie theater exhibits similar decay times.
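As a rough illustration of how a decay time can be estimated from an impulse response, the sketch below applies backward (Schroeder) integration to a synthetic exponential decay. It is a minimal example with made-up data, not the processing pipeline used in the study.

import numpy as np

# Synthetic impulse response: noise with an exponential decay that reaches
# -60 dB at rt60_true seconds. Values are illustrative only.
fs = 48000                                   # sample rate in Hz (assumed)
t = np.arange(0, 1.0, 1 / fs)
rt60_true = 0.64                             # seconds
h = np.random.randn(t.size) * 10 ** (-3 * t / rt60_true)

# Backward (Schroeder) integration gives the energy decay curve in decibels.
edc = np.cumsum(h[::-1] ** 2)[::-1]
edc_db = 10 * np.log10(edc / edc[0])

# Fit a line to the -5 dB to -35 dB portion and extrapolate to -60 dB (a T30-style estimate).
mask = (edc_db <= -5) & (edc_db >= -35)
slope, intercept = np.polyfit(t[mask], edc_db[mask], 1)
print(f"Estimated reverberation time: {-60 / slope:.2f} s")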
For a space with no roof and spaces between the stones for sound to escape, that’s a remarkably long reverberation time. Reverberation occurs because horizontally propagating sound reflects repeatedly between the many stones. And although the time is significantly less than would be recommended for listening to today’s music, even a small amount of reverberation improves the perception of music across genres. Indeed, sound engineers describe reverberation as “aural ketchup” because it improves anything to which it’s added.
It is impossible to know what sounds our ancestors were making at Stonehenge, but musical instruments certainly existed when it was built. Archaeologists have evidence of ancient bone flutes, wooden pipes, animal horns, and drums from Neolithic Britain and Europe. And singing, almost certainly, would have been pervasive at the time—although that leaves no archaeological trace.
Another key parameter we analyzed was the amplification provided by the stones' reflections. Across all the measurement positions, they amplified the sounds of speech by, on average, 4.3 dB. The smallest difference in level we can hear is about 1 dB, whereas a 10 dB increase is heard as a doubling in loudness. Thus the amplification in Stonehenge would have made communication easier, and it would have been especially helpful if a speaker was facing away from the audience.
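To put those decibel figures in perspective, the short sketch below converts a level change into a sound-pressure ratio and a rough loudness factor, using the common 10 dB-per-doubling rule of thumb mentioned above.

def pressure_ratio(db):
    # sound-pressure (amplitude) ratio corresponding to a level change in dB
    return 10 ** (db / 20)

def loudness_factor(db):
    # rough perceived-loudness factor, assuming ~10 dB per perceived doubling
    return 2 ** (db / 10)

for change in (1.0, 4.3, 10.0):
    print(f"{change:+.1f} dB -> pressure x{pressure_ratio(change):.2f}, "
          f"roughly x{loudness_factor(change):.2f} as loud")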
What’s more, the acoustic enhancement of amplification and reverberation happened only when speakers, music makers, and listeners were in the stone circle. Any sounds they created were best for others inside the structure rather than for a bigger audience outside, whose view of the interior would have been obscured. A large number of people were required to transport the stones and construct the monument, but apparently only a small number of people—possibly fewer than 50 within the central horseshoe of bluestones—were able or allowed to fully participate and witness rituals in the stone circle.
I thank Bruno Fazenda and Susan Greaney for their collaboration on the project.
Trevor Cox (firstname.lastname@example.org) is a professor of acoustic engineering at the University of Salford in the UK. |
Triangles may seem like simple figures, but the mathematics behind them is deep enough to be considered its own subject: trigonometry.
As the name suggests, trigonometry is the study of triangles. More specifically, trigonometry deals with the relationships between angles and sides in triangles.
Somewhat surprisingly, the trigonometric ratios can also provide a richer understanding of circles. These ratios are often used in calculus as well as many branches of science including physics, engineering, and astronomy.
The resources in this guide cover the basics of trigonometry, including a definition of trigonometric ratios and functions. They then go over how to use these functions in problems and how to graph them.
Finally, this resource guide concludes with an explanation of the most common trigonometric identities.
Trigonometry especially deals with the ratios of sides in a right triangle, which can be used to determine the measure of an angle. These ratios are called trigonometric functions, and the most basic ones are sine and cosine.
These two functions are used to define the other well-known trigonometric functions: tangent, secant, cosecant, and cotangent.
This section begins by reviewing right triangles and explaining the basic trigonometric functions. It also explains their reciprocals. The topic also covers how to evaluate trigonometric angles, especially the special angles of 30-, 45-, and 60-degrees.
Finally, the guide to this topic covers how to deal with the inverses of trigonometric functions and the two most common ways to measure angles.
- Identify the Sides of Right Triangles
- Trigonometric Functions or Trig. Ratios
- Review of Sine, Cosine, and Tangent
- Secant, Cosecant, Cotangent
- Sin, Cos, Tan, Sec, Csc, Cot
- Evaluate Trigonometric Angles
- Special Angles: 30-Degrees, 45-Degrees, 60-Degrees
- Using a Calculator
- Inverse Trigonometry
- Degrees and Radians
Applications of Trigonometry
There are actually a wide variety of theoretical and practical applications for trigonometric functions. They can be used to find missing sides or angles in a triangle, but they can also be used to find the length of support beams for a bridge or the height of a tall object based on a shadow.
This topic covers different types of trigonometry problems and how the basic trigonometric functions can be used to find unknown side lengths. It also covers how they can be used to find angles and even the area of a triangle.
Finally, this section concludes with subtopics on the Laws of Sines and the Law of Cosines.
- Trigonometry Problems
- Sine Problems
- Cosine Problems
- Tangent Problems
- Find Unknown Sides of Right Angles
- Find Height of Object Using Trigonometry
- Trigonometry Applications
- Angle of Elevation and Depression
- Area of Triangle Using the Sine Function
- Law of Sines or Sine Rule
- Law of Cosines or Cosine Rule
Trigonometry in the Cartesian Plane
Trigonometry in the Cartesian Plane is centered around the unit circle. That is, the circle centered at the point (0, 0) with a radius of 1. Any line connecting the origin with a point on the circle can be constructed as a right triangle with a hypotenuse of length 1. The lengths of the legs of the triangle provide insight into the trigonometric functions. The cyclic nature of the unit circle also reveals patterns in the functions that are useful for graphing.
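As a small illustration of that right-triangle picture, the sketch below uses Python's math module: for an angle t, the point on the unit circle is (cos t, sin t), so the legs of the triangle are exactly the cosine and sine while the hypotenuse (the radius) is 1.

import math

# The legs of the right triangle inscribed in the unit circle are cos(t) and
# sin(t); the hypotenuse is the radius, which always has length 1.
for degrees in (30, 45, 60):
    t = math.radians(degrees)
    x, y = math.cos(t), math.sin(t)     # point on the unit circle
    hypotenuse = math.hypot(x, y)       # should always equal 1
    print(f"{degrees} deg: point ({x:.3f}, {y:.3f}), hypotenuse = {hypotenuse:.3f}")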
This topic begins with a description of angles at the standard position and coterminal angles before explaining the unit circle and reference angles. It then covers how the values of the trigonometric functions change based on the quadrant of the Cartesian Plane. Finally, this section ends by explaining how the unit circle and the xy-plane can be used to solve trigonometry problems.
- Angles at Standard Position and Coterminal Angles
- Unit Circle
- Reference Angle
- Trigonometric Ratios in the Four Quadrants
- Finding the Quadrant in Which an Angle Lies
- Coterminal Angles
- Trigonometric Functions in the Cartesian Plane
- Degrees and Radians
- Evaluating Trigonometric Functions for an Angle, Given a Point on the Angle
- Evaluating Trigonometric Functions Using the Reference Angle
- Finding Trigonometric Values Given One Trigonometric Value/Other Info
- Evaluating Trigonometric Functions at Important Angles
Graphs of Trigonometric Functions
Although the unit circle in the Cartesian plane provides insight into trigonometric functions, each of these functions also has its own graph. These graphs are cyclic in nature. Typically, graphs of trig functions make the most sense when the x-axis is divided into intervals of pi radians while the y-axis is still divided into intervals of whole numbers.
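As a quick, hedged illustration of plotting a trigonometric graph with the x-axis marked in multiples of pi, here is a minimal sketch using matplotlib; the library choice and styling are assumptions, not part of this guide.

import numpy as np
import matplotlib.pyplot as plt

# Graph y = sin(x) over one period with x-axis ticks at multiples of pi.
x = np.linspace(0, 2 * np.pi, 500)
plt.plot(x, np.sin(x))
plt.xticks([0, np.pi / 2, np.pi, 3 * np.pi / 2, 2 * np.pi],
           ["0", "pi/2", "pi", "3pi/2", "2pi"])
plt.yticks([-1, 0, 1])
plt.title("y = sin(x)")
plt.grid(True)
plt.show()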
This topic covers the basic graphs of sine, cosine, and tangent. It then discusses transformations of those graphs and their properties. Finally, the topic concludes with a subtopic about the graphs of the reciprocals of the basic trig functions.
- Trigonometry Graphs
- Sine Graph
- Cosine Graph
- Tangent Graph
- Transformations of Trigonometric Graphs
- Graphing Sine and Cosine with Different Coefficients
- Maximum and Minimum Values of Sine and Cosine Functions
- Graphing Trig Functions: Amplitude, Period, Vertical, and Horizontal Shifts
- Tangent, Cotangent, Secant, Cosecant Graphs
This is the point where trigonometric functions take on a life of their own apart from their basis in triangle side ratios. The functions contain numerous identities that illuminate the relationship between different types of trig functions.
These identities can be used to find the values of angles outside the common reference angles. In fact, they were the main tool available for doing that before calculators.
This topic explains trigonometric identities and how to find and remember them. It also explains how to use the identities to simplify expressions, which involves a fair amount of algebraic manipulation.
The guide goes on to explain how to find the values of different angles based on reference angles with the sum and difference identities and the double-angle and half-angle formulas. The topic continues and concludes with more ways to simplify, factor, and solve trigonometric equations.
- Trigonometric Identities
- Trigonometric Identities: How to Derive/ Remember Them
- Using Trigonometric Identities to Simplify Expressions
- Sum and Difference Identities
- Double-Angle and Half-Angle Formulas
- Trigonometric Equations
- Simplifying Trigonometric Expressions Using Trig Identities
- Simplifying Trigonometric Expressions Involving Fractions
- Simplifying Products of Binomials Involving Trigonometric Functions
- Factoring and Simplifying Trigonometric Expressions
- Solving Trigonometric Equations
- Solving Trigonometric Equations Using Factoring
- Examples with Trigonometric Functions: Even, Odd, or Neither
- Proving a Trigonometric Identity |
A Tangent to a Circle is a straight line that touches the circle at only one point.
A Normal line is a straight line that is perpendicular to the tangent line, meaning it meets the tangent at a right angle.
As a consequence, the Normal line touches or passes through the center of the circle.
As they are straight lines perpendicular to each other at an angle of 90°, the gradient of the Tangent to a Circle multiplied by the gradient of the Normal line is equal to -1.
As a Tangent and Normal are straight lines, their equations will have the form y = mx + c.
A standard circle with center the origin (0,0) has equation
x² + y² = r²,
where r is the circle radius.
A circle has equation x² + y² = 34.
The point A (5,3) lies on the edge of the circle, where there is a Tangent line touching, along with a corresponding Normal line.
It can often help to illustrate the situation with an image.
We can proceed using the point-slope form to establish the equation of the Tangent line, as sketched below.
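The worked algebra is not reproduced above, so the following minimal sketch carries out the computation for this example. It assumes the standard facts that the normal at A passes through the center of the circle and that the tangent's gradient is the negative reciprocal of the normal's.

from fractions import Fraction

# Tangent and normal to x^2 + y^2 = 34 at the point A(5, 3).
ax, ay = Fraction(5), Fraction(3)

# The normal passes through the center (0, 0) and A, so its gradient is ay/ax.
normal_gradient = ay / ax                  # 3/5
tangent_gradient = -1 / normal_gradient    # -5/3, since the product must be -1

# Point-slope form of the tangent: y - ay = m(x - ax), rearranged to y = mx + c.
c = ay - tangent_gradient * ax
print(f"Normal:  y = {normal_gradient}x")
print(f"Tangent: y = {tangent_gradient}x + {c}")   # y = -5/3 x + 34/3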
Show that the straight line with equation y = 2x + 5,
is a tangent to the circle with equation x² + y² = 5.
If a straight line is a tangent to a circle, there will only be 1 point of intersection.
We substitute the straight line equation into the circle equation and solve for x.
If the straight line is a tangent, there will only be one x value that solves.
Only one solution is obtained, confirming that the straight line is a tangent to the circle.
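A quick computational check of that conclusion: substituting the line into the circle gives a quadratic in x, and a zero discriminant confirms the single point of intersection. A minimal sketch:

# Check that y = 2x + 5 is tangent to x^2 + y^2 = 5.
# Substituting gives x^2 + (2x + 5)^2 = 5, which simplifies to 5x^2 + 20x + 20 = 0.
a, b, c = 5, 20, 20
discriminant = b * b - 4 * a * c
if discriminant == 0:
    x = -b / (2 * a)
    y = 2 * x + 5
    print(f"Tangent: single point of intersection at ({x}, {y})")   # (-2.0, 1.0)
else:
    print("Not a tangent: zero or two points of intersection")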
A circle can also have a center that is NOT the origin (0,0).
Instead the center is a point somewhere else on the x-y axis, a point (a,b).
Such a circle has equation (x − a)² + (y − b)² = r².
Where again r is the radius of the circle.
Like before, an image illustrating the situation can be helpful.
The Normal line doesn't always have to be drawn completely.
We have the center point, and the point A on the circle edge.
These points can give us the gradient of the Normal line, which can then give us the gradient of the Tangent line.
We now have enough information to establish the equation of the Tangent line at point A.
As with the similar previous example (1.2), put the straight line equation into the circle equation and solve for x.
If there is only one x value, then there is only one point of intersection and the straight line is therefore a tangent to the circle.
First put the straight line equation into the form "y =".
The straight line is a tangent to the circle. |
Radicals: Rational and Irrational Numbers
We write, for example, √25 = 5:
"The square root of 25 is 5."
The mark √ is called the radical sign (after the Latin radix = root). The number under the radical sign is called the radicand. In the example, 25 is the radicand.
Problem 1. Evaluate the following.
Example 1. Evaluate √(13 · 13).
Solution. √(13 · 13) = 13.
For, 13 · 13 is a square number. And the square root of 13 · 13 is 13!
If a is any whole number, then a · a is a square number, and √(a · a) = a.
Problem 2. Evaluate the following.
We can state the following theorem:
A square number times a square number is itself a square number.
36 · 81 = 6 · 6 · 9 · 9 = 6 · 9 · 6 · 9 = 54 · 54
Problem 3. Without multiplying the given square numbers, each product of square numbers is equal to what square number?
a) 25 · 64 = 5 · 8 · 5 · 8 = 40 · 40
b) 16 · 49 = 4 · 7 · 4 · 7 = 28 · 28
c) 4 · 9 · 25 = 2 · 3 · 5 · 2 · 3 · 5 = 30 · 30
Rational and irrational numbers
A rational number is simply any number of arithmetic: any whole number, fraction, mixed number, or decimal, together with its negative. A rational number has the same ratio to 1 as two natural numbers.
That is what a rational number is. As for what it looks like, it can take the form of a fraction a/b, where a and b are integers (b ≠ 0).
Problem 4. Which of the following numbers are rational?
All of them.
At this point, the student might wonder, What is a number that is not rational?
An example of such a number is √2 ("Square root of 2"). √2 is not a number of arithmetic. The decimal 1.414 is close, because 1.414 · 1.414 = 1.999396 -- which is almost 2.
To see that there is no rational number whose square is 2, suppose there were. Then we could express it as a fraction in lowest terms. But the square of a fraction in lowest terms is also in lowest terms.
No new factors are introduced and the denominator will never divide into the numerator to give 2—or any whole number.
There is no rational number whose square is 2, or whose square is any number that is not a perfect square. We say therefore that √2 is an irrational number.
As a decimal approximation, √2 ≈ 1.414.
(The wavy equal sign ≈ means "is approximately".)
How could we know that? By multiplying 1.414 by itself. If we do, we get 1.999396, which is almost 2. But it should be clear that no decimal multiplied by itself can ever be exactly 2.000000000000000000. If the decimal ends in 1, then its square will end in 1. If the decimal ends in 2, its square will end in 4. And so on. No decimal—no number of arithmetic—multiplied by itself can ever produce 2.
Answer. Only the square roots of square numbers.
√1 = 1 Rational
√4 = 2 Rational
√2, √3, √5, √6, √7, √8 Irrational
√9 = 3 Rational
And so on.
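A short computational check of that pattern: whether the square root of a whole number is rational comes down to whether the number is a perfect square. A minimal sketch:

import math

# The square root of a whole number is rational exactly when the number
# is a perfect square.
for n in range(1, 11):
    root = math.isqrt(n)                 # integer square root
    if root * root == n:
        print(f"sqrt({n}) = {root}  (rational)")
    else:
        print(f"sqrt({n}) is irrational, approximately {math.sqrt(n):.3f}")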
Problem 5. Say the name of each number.
Problem 6. Which of the following numbers are rational and which are irrational?
a) Irrational b) Rational
c) Rational d) Irrational
Only a rational number can we know and name exactly. An irrational number we can know only as a rational approximation.
For the decimal representation of both irrational and rational numbers, see Topic 2 of Precalculus.
An equation x² = a, and the principal square root
Example 2. Solve this equation: x² = 25.
Solution. x = 5 or x = −5.
We say however that the positive value 5 is the principal square root. That is, we say that "the square root of 25" is 5.
As for −5, it is "the negative of the square root of 25."
−√25 = −5.
Thus the symbol √ refers to one non-negative number.
Example 3. Solve this equation:
Always, if an equation looks like this, x² = a, then it has the two solutions x = √a and x = −√a.
Problem 7. Solve for x.
Learn what the inverse of a function is, and how to evaluate inverses of functions that are given in tables or graphs.
Inverse functions, in the most general sense, are functions that "reverse" each other.
For example, here we see that function takes to , to , and to .
The inverse of , denoted (and read as " inverse"), will reverse this mapping. Function takes to , to , and to .
Which of the following is a true statement?
Defining inverse functions
In general, if a function f takes a to b, then the inverse function, f⁻¹, takes b to a.
From this, we have the formal definition of inverse functions:
Let's dig further into this definition by working through a couple of examples.
Example 1: Mapping diagram
Suppose function is defined by mapping diagram above. What is ?
We are given information about function and are asked a question about function . Since inverse functions reverse each other, we need to reverse our thinking.
Specifically, to find , we can find the input of whose output is . This is because if , then by definition of inverses, .
From the mapping diagram, we see that , and so .
Check your understanding
Example 2: Graph
This is the graph of function . Let's find .
To find , we can find the input of that corresponds to an output of . This is because if , then by definition of inverses, .
From the graph, we see that .
Check your understanding
What is ?
Given that , what is ?
A graphical connection
The examples above have shown us the algebraic connection between a function and its inverse, but there is also a graphical connection!
Consider a function given in the graph and in a table of values.
We can reverse the inputs and outputs of the function to find the inputs and outputs of its inverse. So if (a, b) is on the graph of the function, then (b, a) will be on the graph of the inverse.
This gives us the graph and table of values of the inverse.
Looking at the graphs together, we see that the graph of the function and the graph of its inverse are reflections across the line y = x.
This will be true in general; the graph of a function and its inverse are reflections over the line y = x.
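A tiny sketch of that reversal: swapping each (input, output) pair of a function's table produces the table of its inverse, which is exactly what the reflection across the line y = x does to the graph. The sample points below are made up for illustration.

# Swap each (input, output) pair to build the table of the inverse function.
function_table = {-2: 1, 0: 3, 3: 4}                          # x -> f(x), sample values
inverse_table = {y: x for x, y in function_table.items()}     # f(x) -> x

print("f:    ", function_table)
print("f^-1: ", inverse_table)
print("f^-1(f(0)) =", inverse_table[function_table[0]])       # returns 0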
Check your understanding
This is the graph of .
Which is the best choice for the graph of ?
The graph of is a line segment joining the points and .
Drag the endpoints of the solid segment below to graph .
Why study inverses?
It may seem arbitrary to be interested in inverse functions but in fact we use them all the time!
Consider that the equation C = 5/9 (F − 32) can be used to convert a temperature in degrees Fahrenheit, F, to a temperature in degrees Celsius, C.
But suppose we wanted an equation that did the reverse – that converted a temperature in degrees Celsius to a temperature in degrees Fahrenheit. This describes the inverse function, F = 9/5 C + 32.
On a more basic level, we solve many equations in mathematics, by "isolating the variable". When we isolate the variable, we "undo" what is around it. In this way, we are using the idea of inverse functions to solve equations.
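A minimal sketch of the temperature example, assuming the standard conversion formulas; composing a function with its inverse returns the original input, which is the defining property.

# Fahrenheit-to-Celsius conversion and its inverse.
def to_celsius(f):
    return 5 / 9 * (f - 32)

def to_fahrenheit(c):          # the inverse function
    return 9 / 5 * c + 32

for f in (32.0, 98.6, 212.0):
    c = to_celsius(f)
    print(f"{f} F -> {c:.1f} C -> {to_fahrenheit(c):.1f} F")   # round trip returns f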
- how would I find the inverse function of a quadratic, such as 2x^2+2x-1?(39 votes)
- You can find the inverse of any function y=f(x) by reflecting it across the line y=x. The quadratic you list is not one-to-one, so you will have to restrict the domain to make it invertible.
Algebraically reflecting a graph across the line y=x is the same as switching the x and y variables and then resolving for y in terms of x.
As you progress in your ability to find inverse functions you can see Sal solve for an inverse of a quadratic function here:
But I highly recommend you make sure you can find the inverse of a linear function first before tackling quadratics and the associated domain-restriction complications that they bring. If, after working through that video and the subsequent examples, you still aren't sure, you would be better served posting your question there.(78 votes)
- Is it true that when you solve for an inverse of a function, you do PEMDAS backwards?(17 votes)
- Nice question!
Yes you could think of it that way. If a function can be constructed by starting with x and performing a sequence of (reversible) operations, then its inverse can be constructed by starting with x and both reversing each operation and reversing the order of operations.
Example: Suppose f(x) = 7(x - 5)^3. Note that f(x) is constructed by starting with x, subtracting 5, cubing, and then multiplying by 7.
Then f^-1(x) is constructed by starting with x, dividing by 7, taking the cube root, and then adding 5.
So f^-1(x) = cuberoot(x/7) + 5.(43 votes)
- Why is the inverse always a reflection? Is it simply two lines that have the same set of reversed relationships, because plugging in the answer does not make a full restitution, instead it gives the same original value of x in a different line? Is there another reason for this? I am fascinated.(7 votes)
- We can think of a function as a collection of points in the plane. Each point has the form (x, y). If we consider the inverse function, it will contain each of these points, but with the coordinates switched.
So if (a, b) is on our original function, then (b, a) is on the inverse. Let's look at how we get from (a, b) to (b, a). Draw a line segment between them.
The slope of this line segment is then (b-a)/(a-b)=(-1)(b-a)/(b-a)= -1. That's interesting; if we have a point on a function and want to find the corresponding point on the inverse function, we slide along a line of slope -1. But how far do we slide?
Let's find the midpoint of our line segment. In the x-direction, we go from a to b. So the midpoint has the x-coordinate (a+b)/2. In the y-direction, we go from b to a. So the midpoint has y-coordinate (b+a)/2. Same as the x-coordinate!
So the midpoint of the segment must lie on the line y=x. Notice that y=x has a slope of 1, and our segment has a slope of -1. So the two are perpendicular.
So what we've done to move from (a, b) to (b, a) is reflect over the line y=x.(15 votes)
- i have trouble understanding inverses. Can someone help me?
i have trouble solving problems for the inverses.(5 votes)
An inverse function essentially undoes the effects of the original function. If f(x) says to multiply by 2 and then add 1, then the inverse f⁻¹(x) will say to subtract 1 and then divide by 2. If you want to think about this graphically, f(x) and its inverse function will be reflections across the line y = x.
To find the inverse of a function you just have to switch the x and the y and then solve for y.
For example, what is the inverse of y = 2x + 1?
y = 2x + 1
x = 2y + 1. (Switch the x and y)
2y = x - 1
y = (x-1)/2. And we're done.(10 votes)
- i dont understand(7 votes)
A point on a line f(x), let's say (2,1), when flipped perpendicularly, makes (1,2). In the same way, when you extend the two lines (f(x) and its inverse) to touch y = x, they're perpendicular(4 votes)
- What about when you have multiple outputs for the function how do you solve the inverse?(3 votes)
- Good question. This actually happens in the case of inverse trigonometric functions, where one input gives infinite outputs. In this case, we restrict the range of the functions so that only a set amount of outputs are possible. For example, sin^(-1)(x) will only output values between [-pi/2,pi/2].(8 votes)
- What about 3D graphs...or complex planes? Do inverse functions math work or is it just vectors?(2 votes)
- Yes, inverse functions work in 3D graphs and complex planes, not just in vectors.
In mathematics, a 3D graph is a graph that shows a three-dimensional representation of a function or a set of data points. It is represented by three axes: x, y, and z. The x and y-axes represent the horizontal and vertical dimensions, respectively, while the z-axis represents the depth or height dimension.
A complex plane is a two-dimensional plane that represents complex numbers. It is represented by two axes: the real axis and the imaginary axis. The real axis represents the real part of the complex number, while the imaginary axis represents the imaginary part of the complex number.
Inverse functions can be graphed in 3D graphs and complex planes, just like in two-dimensional graphs. The graph of the inverse function is obtained by reflecting the original graph across the line y = x. The inverse function is defined only if the original function is one-to-one, which means that each input has a unique output.
Vectors are also used in 3D graphs, but they are not the only mathematical concept used. Vector functions are used to describe curves and surfaces in three-dimensional space.
In summary, inverse functions work in 3D graphs and complex planes, and they are graphed by reflecting the original graph across the line y = x. Vectors are also used in 3D graphs, but they are not the only mathematical concept used.(8 votes)
- How did you get -3 for the second example? I see no correlation between the -7 and -3...(4 votes)
- you should put -7 on the left side of the equation; f(x)=-7=3x-2 and solve the equation for x, you get 3. Now, knowing that x is the reverse function of y or f(x) which is 3, so f-1(x)=x=3(1 vote)
The domain of f(x) is the range of the inverse function, and the domain of the inverse function is the range of f(x). But it seems not to hold in some cases, like f(x) = √(2x-3): the domain of this function is x >= 3/2, and its inverse is x^2/2 + 3/2, whose domain is all real numbers. So, according to the statement above, the range of f(x) should be all real numbers, but in fact it is x >= 0.(3 votes)
- If f(x) = √(2x-3)
Domain = x>= 3/2; Range = y >= 0
Then for the inverse of f(x) = x^2/2+3/2
Domain = x >= 0; Range = y >= 3/2
If you widen the domain for the inverse function to x = any real number, then you will have input values for the inverse that can not be used in the original function. If you truly want the 2 functions to be inverses, you need to maintain the restrictions on domain/range for the 2 functions.
Hope this makes sense.(5 votes) |
The Earth's average temperature grew by around 1 degree Fahrenheit throughout the 20th century, according to NASA. The consequences of this minor temperature increase are diverse, from prolonged dry seasons and heat waves to more violent hurricanes. Rising sea levels, extreme weather, warming oceans and melting glaciers have all been significant signs that something is wrong with the world's natural systems: climate change. In this article we tackle the causes of climate change that people contribute to blindly every day, having no idea about the great impact they leave on the whole world. Activities such as deforestation and industrial emissions, among many other factors, affect both the local and global climate, with severe impacts on different aspects of life in Egypt such as health, ecosystems, agriculture, the economy, and water resources. Possible solutions, such as wind power, green buildings, and chemical absorption and adsorption to capture greenhouse gases from the atmosphere and turn them into useful products, are also provided in this article.
There has been a lot of scientific evidence proving that the climate is changing and the Earth is getting warmer, as follows:
- Extreme weather. Extremely hot temperatures are expected to increase wherever mean temperatures rise. Drought, for example, will intensify extreme temperatures as soils dry out and fail to provide evaporative cooling at moderate temperatures, thus expanding the distribution of summer high daily temperatures in continental interiors. Atmospheric warming increases the atmosphere's moisture-holding ability, potentially raising the frequency of severe rainfall events. In cities, roads and infrastructure can be heated to 50 to 90 degrees warmer than the air (Thomas Wernberg, 2012).
- Warming oceans. General expectations for biological and ecological responses to warming oceans include poleward shifts in species distributions, earlier spring events and delayed autumn events in mid-to-high latitudes, and reductions in the body sizes of marine ectotherms (Elvira S. Poloczanska, 2016).
- Melting glaciers. Kilimanjaro's famous snows have melted by over 80 percent since 1912. Glaciers in India's Garhwal Himalayas are receding so fast that researchers believe most central and eastern Himalayan glaciers could virtually disappear by 2035. Repeated laser altimeter readings by NASA indicate a diminishing edge of Greenland's ice cap. From the Arctic to Peru, from Switzerland to the equatorial glaciers of Man Jaya in Indonesia, huge ice sheets, gigantic glaciers and sea ice are gradually vanishing (Daniel Glick, 2020).
- Rising sea levels. Melting ice, shifting surface winds and the expansion of warming ocean water all lead to changes in sea level that vary from one place to another. According to the latest BAMS State of the Climate report for 2018, the acceleration of SLR (sea level rise) over the post-1993 timeframe is about 0.1 mm per year per year; this means the SLR rate rises by about 1 mm per year each decade (Zeke Hausfather, 2019).
Climate Change Causes
There is a variety of factors causing climate change; most are driven by human activities as well as natural processes, as follows:
- Deforestation. It is known that green plants get their nutrition through photosynthesis, in which plants consume a great amount of CO2 and release O2 as a byproduct. Deforestation and the cutting down of trees reduce the number of available green soldiers that fight climate change and protect humans. Human beings have been widely replacing green areas with buildings.
- Ozone layer depletion. Ozone is both a natural and a man-made synthetic gas. The ozone layer in the upper atmosphere protects plant and animal life from the ultraviolet and infrared radiation released by the sun, which is known to cause great harm to plants, animals and humans. Although ozone in the lower atmosphere is considered a pollutant, unlike the other greenhouse gases it is largely limited to industrial zones. The ozone layer is damaged as the atmosphere is polluted with toxic gases and chemical effluent from industrial production, air-conditioning devices, vehicle exhaust pipes and refrigerators. Emitted substances such as smoke, sulfur oxide, soot, dust, carbon monoxide (CO), nitrous oxide, chlorofluorocarbons (CFCs) and hydrocarbons deteriorate the ozone layer.
- CO2 concentration. CO2 is emitted into the atmosphere through natural mechanisms such as volcanic eruptions, animal respiration and the burning or rotting of plants and other organic material. Human activities such as the consumption of petroleum products, industrial waste, the burning of wood products for heating homes, operating cars and the generation of electricity also emit CO2 into the atmosphere. The concentration of CO2 has increased greatly since the mid-1700s because of the Industrial Revolution. The IPCC recorded in 2007 that CO2 levels had risen to a record high of 379 ppm and were rising at a rate of 1.9 ppm per year. Under a higher-emissions scenario, CO2 is anticipated to reach 970 ppm by 2100, implying a dramatic multiplication of pre-industrial concentrations. Such a trend in CO2 concentrations is troubling and risky, particularly given its negative impacts on farming systems. The use of liquid and gaseous fuels in the agriculture sector produced emissions of 51.3 and 5.4 million tons of CO2, respectively.
- Greenhouse effect. Greenhouse gases act like a blanket around the Earth, trapping energy in the atmosphere and causing the Earth to warm. They include hydrochlorofluorocarbons, carbon dioxide, methane, water vapor, perfluorocarbons, ozone and chlorofluorocarbons. These atmospheric gases accumulate because of activities that burn fossil fuels, as well as other activities such as clearing green land for agriculture or buildings, and cause the Earth's atmosphere to get warmer than it would naturally. Greenhouse gases are a natural part of the atmosphere's composition, but they are also the result of human activities. Water vapor is the most widespread greenhouse gas, released into the atmosphere through evaporation from oceans, seas, lakes and rivers. Carbon dioxide, methane, nitrous oxide, and ozone also exist naturally in the atmosphere, but human actions are now generating them in large amounts. Chemicals produced that function as greenhouse gases include chlorofluorocarbons (CFCs), hydrochlorofluorocarbons (HCFCs), hydrofluorocarbons (HFCs), and perfluorocarbons (PFCs).
- Aerosols. Aerosols are airborne particles that absorb, scatter and reflect radiation back into space. Haze, windblown dust, and particles that can be traced to erupting volcanoes are instances of natural aerosols. Human activities such as fossil-fuel combustion and slash-and-burn agricultural methods produce additional aerosols. Although aerosols are not deemed a heat-trapping greenhouse gas, they influence the flow of radiated heat energy from the Earth into space. Their effects on climate change are still under discussion, but climate scientists agree that light-colored aerosols have a cooling effect whereas dark-colored aerosols lead to heating.
- Agriculture. Agriculture is also a factor in climate change. Clearing trees for crops, burning crop residues, submerging land in rice paddies, and raising huge numbers of cattle and other ruminants, together with nitrogen fertilization, all release greenhouse gases into the atmosphere (Usman Adamu, 2012).
Consequences of Climate Change on Egypt
Egypt is expected to experience the first effects of climate change in the water domain. With an individual share of just under 1000 m3 a year, Egypt is very near the water poverty line. The Nile River provides over 95% of all water to Egypt, and the annual precipitation varies from a maximum of 180 mm/year on the north coast to an average of 20 mm near Cairo, decreasing to as little as 2 mm near the city of Aswan in Upper Egypt. Climate change is expected to affect both the supply side, the flow of the Nile River, and the demand side, the population's need for water. Effects on the supply side are likely to arise from potential changes in precipitation patterns over the Ethiopian highlands and the equatorial lakes. The situation may be exacerbated by declining rainfall in the upper White and Blue Nile catchments and the Middle Nile Basin (Mohamed Saber, 2006).
Climate Change Impact on Egypt’s Agriculture
Climate change is a great enemy of Egypt's agriculture, as agriculture plays a diverse role in the rural and national social and economic structures. Climate change can both positively and negatively influence the location, timing, and productivity of crop, livestock, and fishery systems at national and global scales. It will also alter the stability of food supplies and create new food security challenges by 2050. Agriculture represents a major part of the national economy as it balances Egyptian trade. Nonetheless, climate change will influence the amount of produce available for export and import, as well as prices. Egypt's land use, farming, and economic activity are entirely confined to a narrow T-shaped strip of land along the Nile and the coast around its delta. There is a chance of a significant decline in Nile streamflow under climate change. The water supply of the Nile is likely to be significantly strained due to increased water demands and evaporative losses arising from the rising temperatures in the semi-arid zone that are regularly forecast by various climate models (Mahmoud A. Medany, 2016).
Plants Health and Food Insufficiency
Egyptian agriculture faces two significant threats. The first is that the Nile River's water resources will probably drop by 30 to 60 percent as a projected effect of climate change. Second, all projections indicate that rain-fed production in North Africa could decline by up to 50 percent due to climate change. It is important to keep in mind that temperature controls seasonal crops and their geographical distribution. Owing to climate change and water scarcity, yields of major crops in Egypt (wheat, maize, clover, rice, cotton, sugar cane, corn, sorghum and soybeans) are projected to decrease. A doubling of CO2 may greatly raise photosynthesis, but crop harvests would decrease due to water shortages and heat-related harm to plant pollination, flowering, and grain formation. By 2050 the decrease in yields because of climate change is expected to reach 19% for maize and sorghum, 11% for rice, 28% for soybean, and 18% for wheat and grain, while cotton yields would increase. Hotter and drier weather would expand the area prone to desertification and would also be exacerbated by increased deforestation and declines in soil fertility (Mohamed Saber, 2006).
Consequences for Sea Level in the Nile Delta
Questions about whether the Nile Delta will be flooded by climate change before the twenty-first century ends have received a lot of attention. The Egyptian coast is connected to the Mediterranean through the southern Levantine sub-basin, which extends from 25°E in the west to 34.5°E in the east. The Egyptian Mediterranean coastal zone includes five large lakes as well as tourist resorts, historical sites, fertile agricultural land, and economic resources including natural gas. The Nile Delta's coastal zone is predicted to be one of the five zones anticipated to endure the worst consequences of a 1.0 m increase in sea level (SLR). El-Raey (2010) predicted that nearly half of the Nile Delta beaches and around 30 percent of the cities of Alexandria and Port Said would be degraded and destroyed even with an SLR of just 0.5 m. The coast of the Nile Delta plays an important role in both the economy and social issues. This is primarily attributed to the region's large population densities, high rates of poverty, and diversified economic activities, including natural gas production, farming, energy production, tourism, and agriculture. According to the Egyptian Environmental Affairs Agency (1999), the agricultural sector employs about 35 percent of Egyptians and generates 14.8 percent of Egyptian gross domestic product (GDP), a sector that is especially significant in the Nile Delta region. SLR will reduce Egyptian GDP by damaging the agricultural sector and changing coastal lake ecology (Mohamed Shaltout, 2015).
Loss of Biodiversity and Habitats
The main habitats in Egypt maintain its biodiversity, but unfortunately climate change will have a great negative effect on them, so we need to keep them protected. Areas located near the northern lakes of Egypt, aquatic ecosystems, the natural mangrove vegetation of the Red Sea, habitats of the Eastern Desert, and marginal pastures in Sinai are predicted to be adversely affected, although each will respond differently. In the Southern Valley and the Western Desert habitats, the water requirement of fields and crops will increase alongside the expected increase in temperature. A critical number of currently threatened animals could be lost as coastal habitats are lost and invaders outcompete native communities. The Red Sea holds some of the most impressive coral reefs in the world, with a large degree of biodiversity: more than 1,000 organisms recorded, with many more yet to be discovered. Corals are especially vulnerable to changes in sea surface temperature, and when physiologically stressed they can lose the symbiotic algae that provide them with nutrients and color. In this case, corals appear white and are referred to as bleached. Two cases of coral reef bleaching in Egypt were observed in 2006. During an intense low tide, the first set of coral reefs was exposed to the open air and thus lost its vitality. This phenomenon continued for a couple of days during the spring season, and a few zones are still affected and have not recovered to this day. Biological diversity has numerous benefits for people: its various species contribute to agriculture, fishing and livestock services, scientific research and cultural heritage. With their genetic components, several plant species help in the advancement of the medical, agricultural and industrial sectors. Biodiversity also provides the daily necessities of life for many local communities, as well as nature tourism with its great economic potential (Mohamed Saber, 2006).
Consequences on Human Health in Egypt
Climate change is predicted to have negative effects on human health in Egypt, exacerbated by high population densities. These may include an increased incidence of asthma and infectious illnesses, vector-borne diseases, neurological disorders, skin cancer, eye cataracts, respiratory problems, heat strokes, and deteriorating public health systems, as well as additional deaths from cardiovascular and respiratory diseases, diarrheal and dysenteric infections, child mortality and malnutrition. The overall health balance is likely to be unfavorable, and communities in low-income countries such as Egypt are likely to be more susceptible to these effects (Mohamed Saber, 2006).
Solutions offered for the climate change issue are still under study. There is an argument about whether the solutions are economically affordable or require too much to manage. Among the successful strategies for limiting the progression of climate change are the following:
It is known that wind power has been the fastest-growing source of electricity in the world since the end of the 20th century. Generating energy from wind, a renewable source, through wind turbines has almost no impact on the environment. Additionally, wind turbines generally do not require water to run. According to the U.S. Energy Agency, the use of wind turbines in 2013 alone reduced water utilization in the energy market by 36.5 billion gallons. The use of wind energy in 2013 also lowered CO2 pollution by about 115 million metric tons, equal to the pollution of 20 million vehicles over the year ('Wind Power Benefits'). Wind power does face one problem, though. One major challenge is the killing of birds and bats that fly into the spinning blades. One way to help tackle this issue is to stop placing wind turbines in places with a high concentration of migrating birds. Another option is to have the blades of wind turbines spin only above a certain wind speed. Scientists find that 99 percent of bat activity ceases in certain places when the wind speed rises above 15 mph (Jameel R. Kaddo, 2016).
Due to their dependence on fossil fuels for electricity and air-conditioning, new buildings emit CO2, one of the main drivers of climate change. It is recommended to use light bulbs that consume much less energy and more efficient cooling and heating systems, which aim to minimize CO2 emissions from buildings. In this manner, we diminish our reliance on fossil fuels for power production, bringing about a decrease in greenhouse gas emissions (Jameel R. Kaddo, 2016).
As noted above, methane is a greenhouse gas that drives the progression of climate change. Natural gas and petroleum operations are among the main sources of methane emissions. Upgrading the infrastructure for storing, processing and refining oil and gas would reduce the release of methane (Jameel R. Kaddo, 2016).
Solutions Under Review
Two techniques have been suggested to tackle the surplus CO2 produced by the use of fossil fuels. These two methods extract CO2 from the atmosphere and render it a functional resource. The first method is chemical absorption, which uses an aqueous amine-ammonia solution to absorb as much CO2 as possible. First, the CO2-containing gas flows through a tube and interacts with a CO2 absorbent flowing in the reverse direction. After absorption, the CO2-laden absorbent streams through a thermal regeneration stripper. The pure CO2 released is compressed for shipping and storage. However, the high regeneration cost of the process, its toxic effects, the oxidation of materials, and its low CO2 capture capacity are major setbacks unless the process is improved. The second method is adsorption. Solid adsorbents such as zeolites, mesoporous silica, microporous organic polymers, metal-organic frameworks (MOFs), and porous carbons have been developed to better adsorb CO2. Due to their low cost, large supply, chemical and thermal stability, large specific surface area and pore volume, easily designed pore size, easy surface modification, and reduced energy consumption for regeneration, carbon-based materials are the most effective, although they have a low CO2 capture capacity. These solid adsorbents adsorb CO2 better under changes in temperature, pressure, or a combination of the two (Jameel R. Kaddo, 2016).
Climate change is a challenge affecting our world, and it has taken tremendous strides since the Middle Ages. Carbon dioxide emissions have accelerated the progress of climate change and intensified our weather. Shifting to renewable energy has been difficult, as the world depends mainly on fossil fuels for industrialization, electricity and transportation. We need to prevent any further changes, and adaptation is also an important factor that we need to consider.
Basics of Python
Python print() function prints the message to the screen or any other standard output device.
Syntax: print(value(s), sep=' ', end='\n', file=file, flush=flush)
value(s) : Any value, and as many as you like. Will be converted to string before printed
sep='separator' : (Optional) Specify how to separate the objects, if there is more than one. Default: ' '
end='end' : (Optional) Specify what to print at the end. Default: '\n'
file : (Optional) An object with a write method. Default: sys.stdout
flush : (Optional) A Boolean specifying whether the output is flushed (True) or buffered (False). Default: False
Though it is not necessary to pass arguments to the print() function, it requires an empty parenthesis at the end that tells Python to execute the function rather than calling it by name. Now, let's explore the optional arguments that can be used with the print() function.
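As a brief illustration of these optional arguments used together (the values chosen here are arbitrary):

import sys

# Illustrative example of print()'s optional arguments.
print("a", "b", "c", sep="-")              # a-b-c : sep joins the objects
print("no newline", end=" | ")             # end replaces the trailing '\n'
print("next item")
print("sent to stderr", file=sys.stderr)   # file redirects the output stream
print("written immediately", flush=True)   # flush forces unbuffered output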
String literals in Python's print statement are primarily used to format or design how a specific string appears when printed using the print() function.
\n : This string literal is used to move the output to a new line while printing a statement.
"" : An empty quote ("") is used to print an empty line.
print(“GeeksforGeeks \n is best for DSA Content.”)
GeeksforGeeks
 is best for DSA Content.
end=" " statement
The end keyword is used to specify the content that is to be printed at the end of the execution of the print() function. By default, it is set to “\n”, which leads to the change of line after the execution of print() statement.
Example : Python print() without new line.
# This line will automatically add a new line before the
# next print statement
print("GeeksForGeeks is the best platform for DSA content")
# This print() function ends with "**" as set in the end argument.
print("GeeksForGeeks is the best platform for DSA content", end="**")
print("Welcome to GFG")
GeeksForGeeks is the best platform for DSA content
GeeksForGeeks is the best platform for DSA content**Welcome to GFG
The print() function can accept any number of positional arguments. These arguments can be separated from each other using a “,” separator. These are primarily used for formatting multiple statements in a single print() function.
b = "for"
print("Geeks", b, "Geeks")
Geeks for Geeks
Contrary to popular belief, the print() function doesn’t convert the messages into text on the screen. These are done by lower-level layers of code, that can read data(message) in bytes. The print() function is an interface over these layers, that delegates the actual printing to a stream or file-like object. By default, the print() function is bound to sys.stdout through the file argument.
Example : Python print() to file
import io

# declare a dummy file
dummy_file = io.StringIO()
# add message to the dummy file
print('Hello Geeks!!', file=dummy_file)
# get the value from the dummy file and print it on screen
print(dummy_file.getvalue())
Example : Using print() function in Python
# Python 3.x program showing
# how to print data on
# a screen

# One object is passed
x = 5
print(x)

# Two objects are passed
print("x =", x)

# code for disabling the softspace feature
print('G', 'F', 'G', sep='')

# using end argument
print("x =", end=' ')
print(x)
Running your First Code in Python
Python programs are not compiled; rather, they are interpreted. Now, let us move to writing a Python program and running it. Please make sure that Python is installed on the system you are working on; if it is not, download it from python.org. We will be using Python 2.7.
Making a Python file:
Python files are stored with the extension ".py". Open a text editor and save a file with the name "hello.py". Open it and write the following code:

print("Hello, World!")
Running the file:
Linux system – Move to the directory from the terminal where the created file (hello.py) is stored by using the 'cd' command, and then type the following in the terminal:

python hello.py

Windows system – Open the command prompt, move to the directory where the file is stored by using the 'cd' command, and then run the file by writing the file name as a command.
Variables in Python
Variables need not be declared first in Python; they can be used directly. Variables in Python are case-sensitive, as in most other programming languages. A short illustrative example follows.
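The example that originally accompanied this section is missing, so the following is a small hedged sketch of the two points above (no declaration needed, and case sensitivity); the variable names are made up.

# Variables are created on first assignment; no declaration is needed.
count = 10            # an integer, used directly
Count = "ten"         # a different variable, because names are case-sensitive
print(count)          # 10
print(Count)          # ten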
Calibration refers to the act of evaluating and adjusting the precision and accuracy of an ultrasonic flaw detector. In ultrasonic testing, several forms of calibration must occur. First, the electronics of the equipment must be calibrated to ensure that they are performing as designed. This operation is usually performed by the equipment manufacturer and will not be discussed further in this material. It is also usually necessary for the operator to perform a "user calibration" of the equipment. This user calibration is necessary because most ultrasonic flaw detectors can be reconfigured for use in a large variety of applications. The user must "calibrate" the system, which includes the equipment settings, the transducer, and the test setup, to validate that the desired level of precision and accuracy is achieved. The term calibration standard is usually only used when an absolute value is measured and, in many cases, the standards are traceable back to standards at the National Institute of Standards and Technology.
In ultrasonic testing, there is also a need for reference standards. Reference standards are used to establish a general level of consistency in measurements and to help interpret and quantify the information contained in the received signal. Reference standards are used to validate that the equipment and the setup provide similar results from one day to the next and that similar results are produced by different systems. Reference standards also help the inspector to estimate the size of flaws. In a pulse-echo type setup, signal strength depends on both the size of the flaw and the distance between the flaw and the transducer. The inspector can use a reference standard with an artificially induced flaw of known size, at approximately the same distance away from the transducer, to produce a signal. By comparing the signal from the reference standard to that received from the actual flaw, the inspector can estimate the flaw size.
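As a rough, hedged illustration of that comparison (not a substitute for the calibrated sizing procedures used in practice), the sketch below simply compares the echo amplitude from an indication with the echo from a reference reflector of known size at a similar sound path; the numbers and the assumed proportionality are invented for illustration.

import math

# Toy comparison of a flaw echo with a reference-reflector echo.
reference_diameter_mm = 3.0     # artificially induced reflector in the reference block
reference_amplitude = 0.80      # normalized echo height from the reference reflector
flaw_amplitude = 0.55           # normalized echo height from the actual indication

difference_db = 20 * math.log10(flaw_amplitude / reference_amplitude)
print(f"Flaw echo is {difference_db:.1f} dB relative to the "
      f"{reference_diameter_mm} mm reference reflector,")
print("so the flaw is estimated to be somewhat smaller than the reference.")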
This section will discuss some of the more common calibration and reference specimens that are used in ultrasonic testing. Be aware that there are other standards available and that specially designed standards may be required for many applications. The information provided here is intended to serve as a general introduction to the standards and not as instruction on their proper use.
Calibration and reference standards for ultrasonic testing come in many shapes and sizes. The type of standard used is dependent on the NDE application and the form and shape of the object being evaluated. The material of the reference standard should be the same as the material being inspected and the artificially induced flaw should closely resemble that of the actual flaw. This second requirement is a major limitation of most standard reference samples. Most use drilled holes and notches that do not closely represent real flaws. In most cases the artificially induced defects in reference standards are better reflectors of sound energy (due to their flatter and smoother surfaces) and produce indications that are larger than those that a similar sized flaw would produce. Producing more “realistic” defects is cost prohibitive in most cases and, therefore, the inspector can only make an estimate of the flaw size. Computer programs that allow the inspector to create computer simulated models of the part and flaw may one day lessen this limitation.
- The IIW Type Calibration Block
This block takes its name from the International Institute of Welding. It is referred to as an IIW “type” reference block because it was patterned after the “true” IIW block but does not conform to IIW requirements in IIS/IIW-23-59. “True” IIW blocks are only made out of steel (to be precise, killed, open hearth or electric furnace, low-carbon steel in the normalized condition with a grain size of McQuaid-Ehn #8) where IIW “type” blocks can be commercially obtained in a selection of materials. The dimensions of “true” IIW blocks are in metric units while IIW “type” blocks usually have English units. IIW “type” blocks may also include additional calibration and reference features such as notches, circular grooves, and scales that are not specified by IIW. There are two full-sized versions and a mini version of the IIW type block. The Mini version is about one-half the size of the full-sized block and weighs only about one-fourth as much. The IIW type US-1 block was derived from the basic “true” IIW block and is shown below in the figure on the left. The IIW type US-2 block was developed for US Air Force application and is shown below in the center. The Mini version is shown on the right.
IIW type blocks are used to calibrate instruments for both angle beam and normal incident inspections. Some of their uses include setting metal-distance and sensitivity settings, determining the sound exit point and refracted angle of angle beam transducers, and evaluating depth resolution of normal beam inspection setups. Instructions on using the IIW type blocks can be found in the annex of American Society for Testing and Materials Standard E164, Standard Practice for Ultrasonic Contact Examination of Weldments.
- The Miniature Angle-Beam or ROMPAS Calibration Block
The miniature angle-beam is a calibration block that was designed for the US Air Force for use in the field for instrument calibration. The block is much smaller and lighter than the IIW block but performs many of the same functions. The miniature angle-beam block can be used to check the beam angle and exit point of the transducer. The block can also be used to make metal-distance and sensitivity calibrations for both angle and normal-beam inspection setups.
A block that closely resembles the miniature angle-beam block and is used in a similar way is the DSC AWS Block. This block is used to determine the beam exit point and refracted angle of angle-beam transducers and to calibrate distance and set the sensitivity for both normal and angle beam inspection setups. Instructions on using the DSC block can be found in the annex of American Society for Testing and Materials Standard E164, Standard Practice for Ultrasonic Contact Examination of Weldments.
The DC AWS Block is a metal path distance and beam exit point calibration standard that conforms to the requirements of the American Welding Society (AWS) and the American Association of State Highway and Transportation Officials (AASHTO). Instructions on using the DC block can be found in the annex of American Society for Testing and Materials Standard E164, Standard Practice for Ultrasonic Contact Examination of Weldments.
The RC Block is used to determine the resolution of angle beam transducers per the requirements of AWS and AASHTO. Engraved Index markers are provided for 45, 60, and 70 degree refracted angle beams.
- Step and Tapered Calibration Wedges
Step and tapered calibration wedges come in a large variety of sizes and configurations. Step wedges are typically manufactured with four or five steps, but custom wedges can be obtained with any number of steps. Tapered wedges have a constant taper over the desired thickness range.
The DS test block is a calibration standard used to check the horizontal linearity and the dB accuracy per requirements of AWS and AASHTO.
- Distance/Area-Amplitude Blocks
Distance/area amplitude correction blocks typically are purchased as a ten-block set, as shown above. Aluminum sets are manufactured per the requirements of ASTM E127 and steel sets per ASTM E428. Sets can also be purchased in titanium. Each block contains a single flat-bottomed, plugged hole. The hole sizes and metal path distances are as follows:
3/64″ at 3″
5/64″ at 1/8″, 1/4″, 1/2″, 3/4″, 1-1/2″, 3″, and 6″
8/64″ at 3″ and 6″
Sets are commonly sold in 4340 Vacuum melt Steel, 7075-T6 Aluminum, and Type 304 Corrosion Resistant Steel. Aluminum blocks are fabricated per the requirements of ASTM E127, Standard Practice for Fabricating and Checking Aluminum Alloy Ultrasonic Standard Reference Blocks. Steel blocks are fabricated per the requirements of ASTM E428, Standard Practice for Fabrication and Control of Steel Reference Blocks Used in Ultrasonic Inspection.
➤ Related Article: Types of Ultrasonic Calibration Block
➤ Related Article: Ultrasonic Transducer Types |
Main points from Chapter 10:
01. In the United States, money consists primarily of currency in circulation (one-third of all money) and demand deposits in banks (two-thirds of all money).
02. Currency is produced by the federal government and is distributed through the economy by the banking system to satisfy the needs of businesses and households.
03. Demand deposits are checking accounts in banks and other financial institutions. Changes in the money supply result primarily from changes in the amount of demand deposits.
04. The usual measure of the money supply is M1, the total of coins and currency in circulation, demand deposits, and travelers' checks.
05. Near money consists of financial assets that are less liquid than money. These include savings deposits, certificates of deposit, and shares in money-market funds.
06. In addition to M1, other measures of the money supply, such as M2, M3, and L, can be defined by adding different kinds of near monies to M1.
07. Money serves three basic functions: it serves as a medium of exchange, a unit of measurement, and a store of value.
08. Money is created and the money supply increases when individuals, businesses, and governments borrow money. The money supply decreases (or fails to increase) when loans are repaid and/or no new loans are being made.
09. Banks must hold a required percentage of their deposits as required reserves. Any reserves held in excess of the required reserves are excess reserves. Excess reserves represent the amount of funds that banks have to lend.
10. The money supply in the United States is controlled by our central bank, called the Federal Reserve System, or simply the "Fed." The Fed controls the money supply by controlling the excess reserves available to banks.
11. The tools that the Fed uses to control the money supply, collectively called monetary policy, include: changing the level of required reserves, changing the discount rate, and open market operations.
12. Required Reserve Ratio: when the required ratio of reserves to total deposits increases, banks will have less excess reserves to loan and the money supply will be unable to increase. When the ratio decreases, banks will have more money to loan and the money supply will increase.
13. The Discount Rate is the interest rate that regular banks are charged when they borrow money from the Fed. Lower rates result in more money being borrowed by banks and re-lent to businesses and individuals. This increases the money supply.
14. Open-Market Operations refer to the Fed buying and selling existing government bonds from banks and individuals. When the Fed buys bonds it takes bonds out of the economy and replaces them with new reserves that can be loaned. When the Fed sells bonds, it takes potentially loanable funds out of the economy and replaces them with nonloanable bonds.
15. The price of money is the interest rate. The quoted interest rate, the one that people and businesses earn and pay, is called the nominal interest rate because it includes an inflation premium. The interest rate that is adjusted for inflation is called the real interest rate.
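A small illustrative calculation of two of these relationships (the deposit and rate figures are made up for the example):
# excess reserves: funds a bank has available to lend (point 09)
deposits = 1000000           # total deposits held by a bank
required_ratio = 0.10        # required reserve ratio set by the Fed
total_reserves = 150000      # reserves the bank actually holds
required_reserves = deposits * required_ratio
excess_reserves = total_reserves - required_reserves
print(excess_reserves)       # 50000.0 available to lend

# nominal vs. real interest rate (point 15)
nominal_rate = 0.07          # quoted rate, includes an inflation premium
inflation = 0.03
real_rate = nominal_rate - inflation
print(real_rate)             # approximately 0.04, the inflation-adjusted rate
|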
The Creeks did not call themselves Creeks until Europeans began using that particular term to refer to that Indian tribe. The Creeks had referred to themselves as 'Muskogee' or 'Ocmulgee' Indians. Europeans began calling these people 'Creeks' in the early 1700s and that became their name, although whites were simply referring to the 'Indians living on Ochese Creek' near present-day Macon, Georgia.
Creek Indians were not nomadic, but rather lived in large towns. These towns would be almost completely self-sufficient, having their own governments, materials and land. When the town grew large enough, the Indians would split it, and about half the town would go build a new town several miles away. They lived in wooden homes with thatched roofs made of sticks and long, thick-bladed grasses.
The Creek Indian Tribe is still very young when compared to other Native American tribes. There was no such tribe prior to the 1700's. Instead, these Indians lived in very large chiefdoms. A chiefdom is a group of tribes living together under one leader. These chiefdoms were very agricultural and were mound-building communities. Sometime around 1500, the chiefdoms collapsed, and the Creeks split into much smaller groups. The reason for the collapse and split is unknown.
Some of the most famous mound-building Native Americans were found in Mexico, which is where the Creek Indians originally migrated from. The Creeks' mounds were flat-topped pyramids that would rise up to 50 feet in height, and many were larger, in terms of ground area covered, than the Pyramids of Egypt. Mounds were typically used to communicate by smoke, to hold festivals and celebrations, for religious worship, and for other purposes. However, these mounds were not quite as impressive as the Egyptian Pyramids to the Europeans.
In the 1500s, Spanish explorers brought smallpox to the 'Creeks'. It is estimated that 90% of their population was killed by smallpox prior to 1700. By 1715, when they started becoming known as Creeks, their population was down to around 10,000. Nevertheless, the Creeks wanted many European goods and were willing to trade with whites.
As early as 1650, the Creeks were trading with the English. When South Carolina was officially established in 1670, the Creeks made a large profit selling slaves to the settlers there. The Creeks would capture Indians from present-day Florida and take them to market in South Carolina. However, the culture in South Carolina shifted away from using Indian slaves toward using slaves from Africa.
To continue the flow of goods into the Creek camps, they continued trading deerskins and other furs. These furs would be shipped to England and made into clothing or other goods. In return, the Creeks received cloth, guns, iron kettles, and rum. These items improved the Creeks' standard of living, but caused conflict within the tribe and with whites.
The Creeks were able to mostly stay out of the American Revolution. However, after the Revolution, the tribe was faced with difficult times. Decreased demand for deerskins and decreased supply of white-tailed deer hurt their economy. The newly-created state of Georgia also pressured the Creeks to turn some of their lands over for plantations. The Creeks gave up parts of their land in the 1790 Treaty of New York, the 1802 Treaty of Fort Wilkinson, and the 1805 Treaty of Washington.
The U.S. government also attempted to turn the Creeks into 'useful' economic tools. The program aimed to assist Creeks in learning to work and own large ranches and plantations. Some Creeks embraced this new lifestyle, while others were angered by it; this division would eventually lead to a civil war.
Some consider the Creek Civil War an extension of the War of 1812 between England and the U.S. Some Creeks backed the U.S. during the war, although the traditionalist group backed the English. The issues over how to deal with whites got so bad, that in 1813, the Creek Civil War began. Traditionalist Creeks were known as Red Sticks, due to the red sticks carried by the medicine men of the tribe. As such, the war has also been referred to as the 'Red Stick War.' The Red Sticks first battled the pro-American Creeks, and the Georgia militia, at the Battle of Burnt Corn Creek in July 1813. The Red Sticks scattered after a surprise attack, but regrouped and drove the militia and other Creeks away.
In late August 1813, the Red Sticks attacked Fort Mims in Alabama. Here they killed 250 troops, Native Americans, and civilians and took another 100 captives. In March of 1814, General Andrew Jackson led a large number of troops and friendly Creeks against the Red Sticks at Horseshoe Bend, Alabama. Jackson and his men killed over 800 Red Sticks, which all but ended the entire war. In August 1814, the war officially ended with the signing of the Treaty of Fort Jackson.
The Treaty of Fort Jackson forced the Creeks to give up 22 million acres of their land to the U.S. government. Andrew Jackson forced all Creeks to give up their land, even those who had fought with him during the Creek Civil War. Georgia representatives paid Creek leader William McIntosh to sign all remaining Creek land over, but the U.S. government would not recognize that treaty. All Creeks later signed the Treaty of Washington, officially giving up all Georgia lands to the government.
There were only about 20,000 Creeks living in Alabama by 1830, and most of those had moved from Georgia after the Treaty of Washington. In 1832, the remaining Creeks agreed to move to Indian Territory in present-day Oklahoma. When whites learned that the land would soon become available, they began accusing the Creeks of attacks on them. The U.S. brought troops in and forced the Creeks to walk along the Trail of Tears in 1836.
The Creeks refer to themselves as the 'Muskogee' or 'Ocmulgee' Indians. 'Creeks' was the name given to them by Europeans and that became their name. They originally lived under large agricultural chiefdoms and built large mounds, flat-topped pyramids that would rise up to 50 feet in height and were typically used to communicate by smoke and to hold festivals and celebrations. The Creeks eventually lived in self-sufficient towns with their own governments, materials and land. They lived in wooden homes with thatched roofs made of sticks and long, thick-bladed grasses.
In the 1500s, Spanish explorers brought smallpox to the Creeks, and an estimated 90% of their population died of the disease prior to 1700. By 1715, their population was down to around 10,000. However, as early as 1650, the Creeks were trading with the English. When South Carolina was officially established in 1670, the Creeks made a large profit selling slaves and trading deerskins and other furs. These furs would be shipped to England and made into clothing and other goods. In return, the Creeks received cloth, guns, iron kettles and rum. However, decreased demand for deerskins hurt the Creek economy.
The newly-created state of Georgia also pressured the Creeks to turn over some of their lands. They gave up parts of their land in the Treaties of New York, Fort Wilkinson, and Washington, which, due to infighting over white influence and concessions, would eventually lead to the Creek Civil War in 1813 between traditionalist Creeks known as Red Sticks and pro-U.S. Creeks. In August 1814, the war officially ended with the signing of the Treaty of Ft. Jackson. The Treaty of Ft. Jackson forced the Creeks to give up 22 million acres of their land. Georgia representatives paid Creek leader William McIntosh to sign all remaining Creek land over, but the U.S. government would not recognize that treaty. All Creeks later signed the Treaty of Washington, officially giving up all their lands.
In 1832, the Creeks agreed to move to Indian Territory in present-day Oklahoma. When whites learned that the land would soon become available, they began accusing the Creeks of attacking them. The U.S. brought in troops and forced the Creeks to walk along the Trail of Tears in 1836.
|
I INTRODUCTION TO LIGHT
Light, form of energy visible to the human eye that is radiated by moving charged particles. Light from the sun provides the energy needed for plant growth and plants convert the energy in sunlight into storable chemical form through a process called photosynthesis. Petroleum, coal, and natural gas are the remains of plants that lived millions of years ago, and the energy these fuels release when they burn is the chemical energy converted from sunlight. When animals digest the plants and animals they eat, they also release energy stored by photosynthesis. Scientists have learned through experimentation that light behaves like a particle at times, and like a wave at other times. The particle like features are called photons. Photons are different from particles of matter in that they have no mass and always move at the constant speed of 300,000 km/sec (186,000 mi/sec). When light diffracts, or bends slightly as it passes around a corner, it shows wavelike behavior. The waves associated with light are called electromagnetic waves because they consist of changing electric and magnetic fields.
II THE NATURE OF LIGHT
To understand the nature of light and how it is normally created, it is necessary to study matter at its atomic level. Atoms are the building blocks of matter, and the motion of one of their constituents, the electron, leads to the emission of light in most sources.
A Light Emission
Light can be emitted, or radiated, by electrons circling the nucleus of their atom. Electrons can circle atoms only in certain patterns called orbitals, and electrons have a specific amount of energy in each orbital. The amount of energy needed for each orbital is called an energy level of the atom. Electrons that circle close to the nucleus have less energy than electrons in orbitals farther from the nucleus. If the electron is in the lowest energy level, then no radiation occurs despite the motion of the electron. If an electron in a lower energy level gains some energy, it must jump to a higher level, and the atom is said to be excited. The motion of the excited electron causes it to lose energy, and it falls back to a lower level. The energy the electron releases is equal to the difference between the higher and lower energy levels. The electron may emit this quantum of energy in the form of a photon. Each atom has a unique set of energy levels, and the energies of the corresponding photons it can emit make up what is called the atom's spectrum. This spectrum is like a fingerprint by which the atom can be identified. The process of identifying a substance from its spectrum is called spectroscopy. The laws that describe the orbitals and energy levels of atoms are the laws of quantum theory. They were invented in the 1920s specifically to account for the radiation of light and the sizes of atoms.
B Electromagnetic Waves
The waves that accompany light are made up of oscillating, or vibrating, electric and magnetic fields, which are force fields that surround charged particles and influence other charged particles in their vicinity. These electric and magnetic fields change strength and direction at right angles, or perpendicularly, to each other in a plane (vertically and horizontally for instance). The electromagnetic wave formed by these fields travels in a direction perpendicular to the field's strength (coming out of the plane). The relationship between the fields and the wave formed can be understood by imagining a wave in a taut rope. Grasping the rope and moving it up and down simulates the action of a moving charge upon the electric field. It creates a wave that travels along the rope in a direction that is perpendicular to the initial up and down movement. Because electromagnetic waves are transverse-that is, the vibration that creates them is perpendicular to the direction in which they travel, they are similar to waves on a rope or waves traveling on the surface of water. Unlike these waves, however, which require a rope or water, light does not need a medium, or substance, through which to travel. Light from the sun and distant stars reaches the earth by traveling through the vacuum of space. The waves associated with natural sources of light are irregular, like the water waves in a busy harbor. Scientists think of such waves as being made up of many smooth waves, where the motion is regular and the wave stretches out indefinitely with regularly spaced peaks and valleys. Such regular waves are called monochromatic because they correspond to a single color of light.
B1 Wavelength, Frequency, and Amplitude
The wavelength of a monochromatic wave is the distance between two consecutive wave peaks. Wavelengths of visible light can be measured in meters or in nanometers (nm), which are one billionth of a meter (or about 0.4 ten-millionths of an inch). Frequency corresponds to the number of wavelengths that pass by a certain point in space in a given amount of time. This value is usually measured in cycles per second, or Hertz (Hz). All electromagnetic waves travel at the same speed, so in one second, more short waves will pass by a point in space than will long waves. This means that shorter waves have a higher frequency than longer waves. The relationship between wavelength, speed, and frequency is expressed by the equation: wave speed equals wavelength times frequency, or
c = λf
where c is the speed of a light wave in m/sec (3 x 10^8 m/sec in a vacuum), λ is the wavelength in meters, and f is the wave's frequency in Hz.
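As a quick check of this relation, here is a small sketch (not part of the original article) that recovers the frequency of green light from its wavelength:
# frequency = c / wavelength, rearranging c = λf
c = 3.0e8            # speed of light in m/sec (vacuum)
wavelength = 550e-9  # 550 nm (green light) expressed in meters
frequency = c / wavelength
print(frequency)     # roughly 5.5e14 Hz, inside the visible range quoted in the spectrum section below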
The amplitude of an electromagnetic wave is the height of the wave, measured from a point midway between a peak and a trough to the peak of the wave. This height corresponds to the maximum strength of the electric and magnetic fields and to the number of photons in the light.
B2 Electromagnetic Spectrum
The electromagnetic spectrum refers to the entire range of frequencies or wavelengths of electromagnetic waves (see Electromagnetic Radiation). Light traditionally refers to the range of frequencies that can be seen by humans. The frequencies of these waves are very high, about one-half to three-quarters of a million billion (5 x 10^14 to 7.5 x 10^14) Hz. Their wavelengths range from 400 to 700 nm. X rays have wavelengths ranging from several thousandths of a nanometer to several nanometers, and radio waves have wavelengths ranging from several meters to several thousand meters.
Waves with frequencies a little lower than the range of human vision (and with wavelengths correspondingly longer) are called infrared. Waves with frequencies a little higher and wavelengths shorter than human eyes can see are called ultraviolet. About half the energy of sunlight at the earth's surface is visible electromagnetic waves, about 3 percent is ultraviolet, and the rest is infrared.
Each different frequency or wavelength of visible light causes our eye to see a slightly different color. The longest wavelength we can see is deep red at about 700 nm. The shortest wavelength humans can detect is deep blue or violet at about 400 nm. Most light sources do not radiate monochromatic light. What we call white light, such as light from the sun, is a mixture of all the colors in the visible spectrum, with some represented more strongly than others. Human eyes respond best to green light at 550 nm, which is also approximately the brightest color in sunlight at the earth's surface.
Polarization refers to the direction of the electric field in an electromagnetic wave. A wave whose electric field is oscillating in the vertical direction is said to be polarized in the vertical direction. The photons of such a wave would interact with matter differently than the photons of a wave polarized in the horizontal direction. The electric field in light waves from the sun vibrates in all directions, so direct sunlight is called unpolarized. Sunlight reflected from a surface is partially polarized parallel to the surface. Polaroid sunglasses block light that is horizontally polarized and therefore reduce glare from sunlight reflecting off horizontal surfaces.
Photons may be described as packets of light energy, and scientists use this concept to refer to the particle-like aspect of light. Photons are unlike conventional particles, such as specks of dust or marbles, however, in that they are not limited to a specific volume in space or time. Photons are always associated with an electromagnetic wave of a definite frequency. In 1900 the German physicist Max Planck discovered that light energy is carried by photons. He found that the energy of a photon is equal to the frequency of its electromagnetic wave multiplied by a constant called h, or Planck's constant. This constant is very small because one photon carries little energy. Using the watt-second, or Joule, as the unit of energy, Planck's constant is 6.626 x 10^-34 (a decimal point followed by 33 zeros and then the digits 6626) Joule-seconds in exponential notation. The energy consumed by a one watt light bulb in one second, for example, is equivalent to two and a half million trillion photons of green light. Sunlight warms one square meter at the top of the earth's atmosphere at noon at the equator with the equivalent of about 14 100-watt lightbulbs. Light waves from the sun, therefore, produce a very large number of photons.
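A short sketch of Planck's relation as described above (the green-light frequency is taken from the earlier example):
h = 6.626e-34          # Planck's constant in joule-seconds
frequency = 5.45e14    # green light, about 550 nm
photon_energy = h * frequency          # roughly 3.6e-19 joules per photon
photons_per_joule = 1.0 / photon_energy
print(photons_per_joule)  # about 2.8e18, on the order of the million-trillion figure quoted above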
D Sources of Light
Sources of light differ in how they provide energy to the charged particles, such as electrons, whose motion creates the light. If the energy comes from heat, then the source is called incandescent. If the energy comes from another source, such as chemical or electrical energy, the source is called luminescent (see Luminescence).
In an incandescent light source, hot atoms collide with each other. These collisions transfer energy to some electrons, boosting them into higher energy levels. As the electrons release this energy, they emit photons. Some collisions are weak and some are strong, so the electrons are excited to different energy levels and photons of different energies are emitted. Candle light is incandescent and results from the excited atoms of soot in the hot flame. Light from an incandescent light bulb comes from excited atoms in a thin wire called a filament that is heated by passing an electric current through it.
The sun is an incandescent light source, and its heat comes from nuclear reactions deep below its surface. As the nuclei of atoms interact and combine in a process called nuclear fusion, they release huge amounts of energy. This energy passes from atom to atom until it reaches the surface of the sun, where the temperature is about 6000° C (11,000° F). Different stars emit incandescent light of different frequencies-and therefore color-depending on their mass and their age.
All thermal, or heat, sources have a broad spectrum, which means they emit photons with a wide range of energies. The color of incandescent sources is related to their temperature, with hotter sources having more blue in their spectra, or ranges of photon energies, and cooler sources more red. About 75 percent of the radiation from an incandescent light bulb is infrared. Scientists learn about the properties of real incandescent light sources by comparing them to a theoretical incandescent light source called a black body. A black body is an ideal incandescent light source, with an emission spectrum that does not depend on what material the light comes from, but only its temperature.
A luminescent light source absorbs energy in some form other than heat, and is therefore usually cooler than an incandescent source. The color of a luminescent source is not related to its temperature. A fluorescent light is a type of luminescent source that makes use of chemical compounds called phosphors. Fluorescent light tubes are filled with mercury vapor and coated on the inside with phosphors. As electricity passes through the tube, it excites the mercury atoms and makes them emit blue, green, violet, and ultraviolet light. The electrons in phosphor atoms absorb the ultraviolet radiation, then release some energy to heat before emitting visible light with a lower frequency.
Phosphor compounds are also used to convert electron energy to light in a television picture tube. Beams of electrons in the tube collide with phosphor atoms in small dots on the screen, exciting the phosphor electrons to higher energy levels. As the electrons drop back to their original energy level, they emit some heat and visible light. The light from all the phosphor dots combines to form the picture.
In certain phosphor compounds, atoms remain excited for a long time before radiating light. A light source is called phosphorescent if the delay between energy absorption and emission is longer than one second. Phosphorescent materials can glow in the dark for several minutes after they have been exposed to strong light.
The aurora borealis and aurora australis (northern and southern lights) in the night sky in high latitudes are luminescent sources. Electrons in the solar wind that sweeps out from the sun become deflected in the earth's magnetic field and dip into the upper atmosphere near the north and south magnetic poles. The electrons then collide with atmospheric molecules, exciting the molecules' electrons and making them emit light in the sky.
Chemiluminescence occurs when a chemical reaction produces molecules with electrons in excited energy levels that can then radiate light. The color of the light depends on the chemical reaction. When chemiluminescence occurs in plants or animals it is called bioluminescence. Many creatures, from bacteria to fish, make light this way by manufacturing substances called luciferase and luciferin. Luciferase helps luciferin combine with oxygen, and the resulting reaction creates excited molecules that emit light. Fireflies use flashes of light to attract mates, and some fish use bioluminescence to attract prey, or confuse predators.
D3 Synchrotron Radiation
Not all light comes from atoms. In a synchrotron light source, electrons are accelerated by microwaves and kept in a circular orbit by large magnets. The whole machine, called a synchrotron, resembles a large artificial atom. The circulating electrons can be made to radiate very monochromatic light at a wide range of frequencies.
A laser is a special kind of light source that produces very regular waves that permit the light to be very tightly focused. Laser is actually an acronym for Light Amplification by Stimulated Emission of Radiation. Each radiating charge in a non-laser light source produces a light wave that may be a little different from the waves produced by the other charges. Laser sources have atoms whose electrons radiate all in step, or synchronously. As a result, the electrons produce light that is polarized, monochromatic, and coherent, which means that its waves remain in step, with their peaks and troughs coinciding, over long distances.
This coherence is made possible by the phenomenon of stimulated emission. If an atom is immersed in a light wave with a frequency, polarization, and direction the same as light that the atom could emit, then the radiation already present stimulates the atom to emit more of the same, rather than emit a slightly different wave. So the existing light is amplified by the addition of one more photon from the atom. A luminescent light source can provide the initial amplification, and mirrors are used to continue the amplification.
Lasers have many applications in medicine, scientific research, military technology, and communications. They provide a very focused, powerful, and controllable energy source that can be used to perform delicate tasks. Laser light can be used to drill holes in diamonds and to make microelectronic components. The precision of lasers helps doctors perform surgery without damaging the surrounding tissue. Lasers are useful for space communications because laser light can carry a great deal of information and travel long distances without losing signal strength.
E Detection of Light
For each way of producing light there is a corresponding way of detecting it. Just as heat produces incandescent light, for example, light produces measurable heat when it is absorbed by a material.
E1 Photoelectric Effect
The photoelectric effect is a process in which an atom absorbs a photon that has so much energy that the photon sets one of the atom's electrons free to move outside the atom. Part of the photon's energy goes toward releasing the electron from the atom. This energy is called the activation energy of the electron. The rest of the photon's energy is transferred to the released electron in the form of motion, or kinetic energy. Since the photon energy is proportional to frequency, the released electron, or photoelectron, moves faster when it has absorbed high-frequency light.
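A minimal sketch of the energy bookkeeping described above; the activation energy used here is an assumed, illustrative value, not a figure from the article:
h = 6.626e-34                  # Planck's constant, joule-seconds
frequency = 1.0e15             # ultraviolet light, in Hz
photon_energy = h * frequency  # energy absorbed from the photon
activation_energy = 3.2e-19    # assumed energy needed to free the electron, joules
kinetic_energy = photon_energy - activation_energy
print(kinetic_energy)          # about 3.4e-19 J; higher-frequency light yields faster photoelectrons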
Metals with low activation energies are used to make photodetectors and photoelectric cells whose electrical properties change in the presence of light. Solar cells use the photoelectric effect to convert sunlight into electricity. Solar cells are used in place of electric batteries in remote applications like space satellites or roadside emergency telephones (see Solar Energy). Hand-held calculators and watches often use solar cells so that battery replacement is unnecessary.
E2 Photochemical Detection
The change induced in photographic film exposed to light is an example of photochemical detection of photons. Light induces a chemical change in photosensitive chemicals on film. The film is then processed to convert the chemical change into a permanent image and to remove the photosensitive chemicals from the film so it will not continue to change when it is viewed in full light.
Human vision works on a similar principle. Light of different frequencies causes different chemical changes in the eye. The chemical action generates nerve impulses that our brains interpret as color, shape, and location of objects.
III BEHAVIOR OF LIGHT
Light behavior can be divided into two categories: How light interacts with matter and how light travels, or propagates through space or through transparent materials. The propagation of light has much in common with the propagation of other kinds of waves, including sound waves and water waves.
A Interaction with Material
When light strikes a material, it interacts with the atoms in the material, and the corresponding effects depend on the frequency of the light and the atomic structure of the material. In transparent materials, the electrons in the material oscillate, or vibrate, while the light is present. This oscillation momentarily takes energy away from the light and then puts it back again. The result is to slow down the light wave without leaving energy behind. Denser materials generally slow the light more than less dense materials, but the effect also depends on the frequency or wavelength of the light.
Materials that are not completely transparent either absorb light or reflect it. In absorbing materials, such as dark colored cloth, the energy of the oscillating electrons does not go back to the light. The energy instead goes toward increasing the motion of the atoms, which causes the material to heat up. The atoms in reflective materials, such as metals, re-radiate light that cancels out the original wave. Only the light re-radiated back out of the material is observed. All materials exhibit some degree of absorption, refraction, and reflection of light. The study of the behavior of light in materials and how to use this behavior to control light is called optics.
Refraction is the bending of light when it passes from one kind of material into another. Because light travels at a different speed in different materials, it must change speeds at the boundary between two materials. If a beam of light hits this boundary at an angle, then light on the side of the beam that hits first will be forced to slow down or speed up before light on the other side hits the new material. This makes the beam bend, or refract, at the boundary. Light bouncing off an object underwater, for instance, travels first through the water and then through the air to reach an observer's eye. From certain angles an object that is partially submerged appears bent where it enters the water because light from the part underwater is being refracted.
The refractive index of a material is the ratio of the speed of light in empty space to the speed of light inside the material. Because light of different frequencies travels at different speeds in a material, the refractive index is different for different frequencies. This means that light of different colors is bent by different angles as it passes from one material into another. This effect produces the familiar colorful spectrum seen when sunlight passes through a glass prism. The angle of bending at a boundary between two transparent materials is related to the refractive indexes of the materials through Snell's Law, a mathematical formula that is used to design lenses and other optical devices to control light.
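Snell's Law itself is not written out above. In its usual form it reads n1 * sin(angle1) = n2 * sin(angle2); the sketch below applies it to light passing from air into water (the refractive indexes are typical textbook values, not taken from the article):
import math

n_air, n_water = 1.00, 1.33        # refractive indexes of air and water
incident_angle = math.radians(30)  # angle measured from the normal, in air
# Snell's Law: n_air * sin(incident) = n_water * sin(refracted)
refracted_angle = math.asin(n_air * math.sin(incident_angle) / n_water)
print(math.degrees(refracted_angle))  # about 22 degrees; the beam bends toward the normal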
Reflection also occurs when light hits the boundary between two materials. Some of the light hitting the boundary will be reflected into the first material. If light strikes the boundary at an angle, the light is reflected at the same angle, similar to the way balls bounce when they hit the floor. Light that is reflected from a flat boundary, such as the boundary between air and a smooth lake, will form a mirror image. Light reflected from a curved surface may be focused into a point, a line, or onto an area, depending on the curvature of the surface.
Scattering occurs when the atoms of a transparent material are not smoothly distributed over distances greater than the length of a light wave, but are bunched up into lumps of molecules or particles. The sky is bright because molecules and particles in the air scatter sunlight. Light with higher frequencies and shorter wavelengths is scattered more than light with lower frequencies and longer wavelengths. The atmosphere scatters violet light the most, but human eyes do not see this color, or frequency, well. The eye responds well to blue, though, which is the next most scattered color. Sunsets look red because when the sun is at the horizon, sunlight has to travel through a longer distance of atmosphere to reach the eye. The thick layer of air, dust and haze scatters away much of the blue. The spectrum of light scattered from small impurities within materials carries important information about the impurities. Scientists measure light scattered by the atmospheres of other planets in the solar system to learn about the chemical composition of the atmospheres.
B How Light Travels
The first successful theory of light wave motion in three dimensions was proposed by the Dutch scientist Christiaan Huygens in 1678. Huygens suggested that light wave peaks form surfaces like the layers of an onion. In a vacuum, or a uniform material, the surfaces are spherical. These wave surfaces advance, or spread out, through space at the speed of light. Huygens also suggested that each point on a wave surface can act like a new source of smaller spherical waves, which may be called wavelets, that are in step with the wave at that point. The envelope of all the wavelets is a wave surface. An envelope is a curve or surface that touches a whole family of other curves or surfaces like the wavelets. This construction explains how light seems to spread away from a pinhole rather than going in one straight line through the hole. The same effect blurs the edges of shadows. Huygens's principle, with minor modifications, accurately describes all forms of wave motion.
Interference in waves occurs when two waves overlap. If a peak of one wave is aligned with the peak of the second wave, then the two waves will produce a larger wave with a peak that is the sum of the two overlapping peaks. This is called constructive interference.
If a peak of one wave is aligned with a trough of the other, then the waves will tend to cancel each other out and they will produce a smaller wave or no wave at all. This is called destructive interference. In 1803 the English scientist Thomas Young studied interference of light waves by letting light pass through a screen with two slits. In this configuration, the light from each slit spreads out according to Huygens' principle and eventually overlaps with light from the other slit. If a screen is set up in the region where the two waves overlap, a point on the screen will be light or dark depending on whether the two waves interfere constructively or destructively. If the difference between the distance from one slit to a point on the screen and the other slit to the same point on the screen is an exact number of wavelengths, then light waves arriving at that point will be in step and constructively interfere, making the point bright. If the difference is an exact odd number of half wavelengths, then light waves will arrive out of step, with one wave's trough arriving at the same time as another wave's peak. The waves will destructively interfere, making the point dark. The resulting pattern is a series of parallel bright and dark lines on the screen.
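The bright/dark test described above can be written out directly: compute the two slit-to-point distances and compare their difference to the wavelength. The geometry below is invented purely for illustration:
import math

wavelength = 550e-9      # green light, meters
slit_separation = 1e-4   # 0.1 mm between the slits
screen_distance = 1.0    # meters from the slits to the screen
point_y = 2.75e-3        # height of the observation point on the screen, meters

# distance from each slit to the observation point
d1 = math.hypot(screen_distance, point_y - slit_separation / 2)
d2 = math.hypot(screen_distance, point_y + slit_separation / 2)
path_difference = abs(d2 - d1)

# a whole number of wavelengths -> constructive (bright); an odd number of half wavelengths -> destructive (dark)
print(path_difference / wavelength)  # close to 0.5 here, so this point falls on a dark fringe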
Instruments called interferometers use various arrangements of reflectors to produce two beams of light, which are allowed to interfere. These instruments can be used to measure tiny differences in distance or in the speed of light in one of the beams by observing the interference pattern produced by the two beams.
Holography is another application of interference. A hologram is made by splitting a light wave in two with a partially reflecting mirror. One part of the light wave travels through the mirror and is sent directly to a photographic plate. The other part of the wave is reflected first toward a subject, a face for example, and then toward the plate. The resulting photograph is a hologram. Instead of being an image of the face, it is an image of the interference pattern between the two beams. A normal photograph only records the light and dark features of the face and ignores the positions of peaks and troughs of the light wave that form the interference pattern. Since the full light wave is restored when a hologram is illuminated, the viewer can see whatever the original wave contained, including the three dimensional quality of the original face.
Diffraction is the spreading of light waves as they pass through a small opening or around a boundary. Young's principle of interference can be applied to Huygens's explanation of diffraction to explain fringe patterns in diffracted light. As a beam of light emerges from a slit in an illuminated screen, the light some distance away from the screen will consist of overlapping wavelets from different points of the light wave in the opening of the slit. When the light strikes a spot on a display screen across from the slit, these points are at different distances from the spot, so their wavelets can interfere and lead to a pattern of light and dark regions. The pattern produced by light from a single slit will not be as pronounced as a pattern from two slits. This is because there are an infinite number of interfering waves, one from each point emerging from the slit, and their interference patterns overlap each other.
IV MEASURING LIGHT
Monochromatic light, or light of one color, has several characteristics that can be measured. As discussed in the section on electromagnetic waves, the length of light waves is measured in meters, and the frequency of light waves is measured in cycles per second, or Hertz. The wavelength can be measured with interferometers, and the frequency determined from the wavelength and a measurement of the velocity of light in meters per second. Monochromatic light also has a well defined polarization that can be measured using devices called polarimeters. Sometimes the direction of scattered light is also an important quantity to measure.
When light is considered as a source of illumination for human eyes, its intensity, or brightness, is measured in units that are based on a modernized version of the perceived brightness of a candle. These units include the rate of energy flow in light, which, for monochromatic light traveling in a single direction, is determined by the rate of flow of photons. The rate of energy flow in this case can be stated in watts, or Joules per second. Usually light contains many colors and radiates in many directions away from a source such as a lamp.
Scientists use the units candela and lumen to measure the brightness of light as perceived by humans. These units account for the different response of the eye to light of different colors. The lumen measures the total amount of energy in the light radiated in all directions, and the candela measures the amount radiated in a particular direction. The candela was originally called the candle, and it was defined in terms of the light produced by a standard candle. It is now defined as the energy flow in a given direction of a yellow-green light with a frequency of 540 x 10^12 Hz and a radiant intensity, or energy output, of 1/683 watt into the opening of a cone of one steradian. The steradian is a measure of angle in three dimensions.
The lumen can be defined in terms of a source that radiates one candela uniformly in all directions. If a sphere with a radius of one foot were centered on the light source, then one square foot of the inside surface of the sphere would be illuminated with a flux of one lumen. Flux means the rate at which light energy is falling on the surface. The illumination, or luminance, of that one square foot is defined to be one foot-candle.
The illumination at a different distance from a source can be calculated from the inverse square law: One lumen of flux spreads out over an area that increases as the square of the distance from the center of the source. This means that the light per square foot decreases as the inverse square of the distance from the source. For instance, if 1 square foot of a surface that is 1 foot away from a source has an illumination of 1 foot-candle, then 1 square foot of a surface that is 4 feet away will have an illumination of 1/16 foot-candle. This is because 4 feet away from the source, the 1 lumen of flux landing on 1 square foot has had to spread out over 16 square feet. In the metric system, the unit of luminous flux is also called the lumen, and the unit of illumination is defined in meters and is called the lux.
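The 1-foot versus 4-foot example above can be checked with a few lines:
def illumination(distance_ft, flux_at_one_foot=1.0):
    # inverse square law: illumination falls off as 1 / distance^2
    return flux_at_one_foot / distance_ft ** 2

print(illumination(1))  # 1.0 foot-candle
print(illumination(4))  # 0.0625 foot-candle, i.e. 1/16, matching the example above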
B The Speed of Light
Scientists have defined the speed of light to be exactly 299,792,458 meters per second (about 186,000 miles per second). This definition is possible because, since 1983, scientists have known the distance light travels in one second more accurately than the definition of the standard meter. Therefore, in 1983, scientists defined the meter as 1/299,792,458 of the distance light travels in one second. This precise measurement is the latest step in a long history of measurement, beginning in the early 1600s with an unsuccessful attempt by Italian scientist Galileo to measure the speed of lantern light from one hilltop to another.
The first successful measurements of the speed of light were astronomical. In 1676 the Danish astronomer Olaus Roemer noticed a delay in the eclipse of a moon of Jupiter when it was viewed from the far side as compared with the near side of earth's orbit. Assuming the delay was the travel time of light across the earth's orbit, and knowing roughly the orbital size from other observations, he divided distance by time to estimate the speed.
English physicist James Bradley obtained a better measurement in 1729. Bradley found it necessary to keep changing the tilt of his telescope to catch the light from stars as the earth went around the sun. He concluded that the earth's motion was sweeping the telescope sideways relative to the light that was coming down the telescope. The angle of tilt, called the stellar aberration, is approximately the ratio of the orbital speed of the earth to the speed of light. (This is one of the ways scientists determined that the earth moves around the sun and not vice versa.)
In the mid-19th century, French physicist Armand Fizeau directly measured the speed of light by sending a narrow beam of light between gear teeth in the edge of a rotating wheel. The beam then traveled a long distance to a mirror and came back to the wheel where, if the spin were fast enough, a tooth would block the light. Knowing the distance to the mirror and the speed of the wheel, Fizeau could calculate the speed of light. During the same period, the French physicist Jean Foucault made other, more accurate experiments of this sort with spinning mirrors.
Scientists needed accurate measurements of the speed of light because they were looking for the medium that light traveled in. They called the medium ether, which they believed waved to produce the light. If ether existed, then the speed of light should appear larger or smaller depending on whether the person measuring it was moving toward or away from the ether waves. However, all measurements of the speed of light in different moving reference frames gave the same value.
In 1887 the American physicists Albert A. Michelson and Edward Morley performed a very sensitive experiment designed to detect the effects of ether. They constructed an interferometer with two light beams-one that pointed along the direction of the earth's motion, and one that pointed in a direction perpendicular to the earth's motion. The beams were reflected by mirrors at the ends of their paths and returned to a common point where they could interfere. Along the first beam, the scientists expected the earth's motion to increase or decrease the beam's velocity so that the number of wave cycles throughout the path would be changed slightly relative to the second beam, resulting in a characteristic interference pattern. Knowing the velocity of the earth, it was possible to predict the change in the number of cycles and the resulting interference pattern that would be observed. The Michelson-Morley apparatus was fully capable of measuring it, but the scientists did not find the expected results.
The paradox of the constancy of the speed of light created a major problem for physical theory that German-born American physicist Albert Einstein finally resolved in 1905. Einstein suggested that physical theories should not depend on the state of motion of the observer. Instead, Einstein said the speed of light had to remain constant, and all the rest of physics had to be changed to be consistent with this fact. This special theory of relativity predicted many unexpected physical consequences, all of which have since been observed in nature.
V HISTORY OF LIGHT THEORIES
The earliest speculations about light were hindered by the lack of knowledge about how the eye works. The Greek philosophers from as early as Pythagoras, who lived during the 5th century BC, believed light issued forth from visible things, but most also thought vision, as distinct from light, proceeded outward from the eye. Plato gave a version of this theory in his dialogue Timaeus, written in the 4th century BC, which greatly influenced later thought.
Some early ideas of the Greeks, however, were correct. The philosopher and statesman Empedocles believed that light travels with finite speed, and the philosopher and scientist Aristotle accurately explained the rainbow as a kind of reflection from raindrops. The Greek mathematician Euclid understood the law of reflection and the properties of mirrors. Early thinkers also observed and recorded the phenomenon of refraction, but they did not know its mathematical law. The mathematician and astronomer Ptolemy was the first person on record to collect experimental data on optics, but he too believed vision issued from the eye. His work was further developed by the Egyptian scientist Ibn al-Haytham, who worked in Iraq and Egypt and was known to Europeans as Alhazen. Through logic and experimentation, Alhazen finally discounted Plato's theory that vision issued forth from the eye. In Europe, Alhazen was the most well known among a group of Islamic scholars who preserved and built upon the classical Greek tradition. His work influenced all later investigations on light.
A Early Scientific Theories
The early modern scientists Galileo, Johannes Kepler of Germany, and René Descartes of France all made contributions to the understanding of light. Descartes discussed optics and reported the law of refraction in his famous Discours de la méthode (Discourse on Method), published in 1637. The Dutch astronomer and mathematician Willebrord Snell independently discovered the law of refraction in 1620, and the law is now named after him.
During the late 1600s, an important question emerged: Is light a swarm of particles, or is it a wave in some pervasive medium through which ordinary matter freely moves? English physicist Sir Isaac Newton was a proponent of the particle theory, and Huygens developed the wave theory at about the same time. At the time it seemed that wave theories could not explain optical polarization because waves that scientists were familiar with moved parallel, not perpendicular, to the direction of wave travel. On the other hand, Newton had difficulty explaining the phenomenon of interference of light. His explanation forced a wavelike property on a particle description. Newton's great prestige coupled with the difficulty of explaining polarization caused the scientific community to favor the particle theory, even after English physicist Thomas Young analyzed a new class of interference phenomena using the wave theory in 1803.
The wave theory was finally accepted after French physicist Augustin Fresnel supported Young's ideas with mathematical calculations in 1815 and predicted surprising new effects. Irish mathematician Sir William Hamilton clarified the relationship between wave and particle viewpoints by developing a theory that unified optics and mechanics. Hamilton's theory was important in the later development of quantum mechanics.
Between the time of Newton and Fresnel, scientists developed mathematical techniques to describe wave phenomena in fluids and solids. Fresnel and his successors were able to use these advances to create a theory of transverse waves that would account for the phenomenon of optical polarization. As a result, an entire wave theory of light existed in mathematical form before the British physicist James Clerk Maxwell began his work on electromagnetism. In his theory of electromagnetism, Maxwell showed that electric and magnetic fields affect each other in such a way as to permit waves to travel through space. The equations he derived to describe these electromagnetic waves matched the equations scientists already knew to describe light. Maxwell's equations, however, were more general in that they described electromagnetic phenomena other than light and they predicted waves throughout the electromagnetic spectrum. In addition, his theory gave the correct speed of light in terms of the properties of electricity and magnetism. When the German physicist Heinrich Hertz later detected electromagnetic waves at lower frequencies, which the theory predicted, the basic correctness of Maxwell's theory was confirmed.
Maxwell's work left unsolved a problem common to all wave theories of light. A wave is a continuous phenomenon, which means that as it travels, its electromagnetic field must vary at each of the infinitely many points in every small region of space. When we add heat to any system to raise its temperature, the energy is shared equally among all the parts of the system that can move. When this idea is applied to light, with its infinite number of moving parts, it appears to require an infinite amount of heat to give all the parts equal energy. But thermal radiation, the process in which heated objects emit electromagnetic waves, occurs in nature with a finite amount of heat. Something that could account for this was missing from Maxwell's theory. In 1900 the German physicist Max Planck provided the missing concept: he proposed the existence of a light quantum, a finite packet of energy that later became known as the photon.
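The difficulty described above, and Planck's resolution of it, can be summarised with two standard formulas for the energy density of thermal radiation at frequency \(\nu\) and temperature \(T\); these are modern textbook expressions rather than material from the original article. Sharing the heat energy equally among all wave modes gives the classical Rayleigh-Jeans expression,

\[ u(\nu, T) = \frac{8\pi \nu^2}{c^3} \, k_B T, \]

which grows without limit at high frequencies, so the total radiated energy \(\int_0^\infty u \, d\nu\) is infinite. Planck's assumption that light energy comes in packets of size \(E = h\nu\) instead yields

\[ u(\nu, T) = \frac{8\pi h \nu^3}{c^3} \, \frac{1}{e^{h\nu / k_B T} - 1}, \]

in which the exponential factor suppresses the high-frequency modes, so the total energy is finite, as observed.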
B Modern Theory
Planck's theory remained mystifying until Einstein showed how it could be used to explain the photoelectric effect, in which the speed of ejected electrons was related not to the intensity of light, but to its frequency. This was consistent with Planck's theory, which suggested that a photon's energy was related to its frequency. During the next two decades scientists recast all of physics to be consistent with Planck's theory. The result was a picture of the physical world that was different from anything ever before imagined. Its essential feature is that all matter appears in physical measurements to be made of discrete quanta, which are something like particles. Unlike the particles of Newtonian physics, however, a quantum particle cannot be viewed as having a definite path of movement that can be predicted through laws of motion. Quantum physics only permits the prediction of the probability of where particles may be found. The probability is the squared amplitude of a wave field, sometimes called the wave function, associated with the particle. For photons the underlying probability field is what we know as the electromagnetic field.

The current world view that scientists use, called the Standard Model, divides particles into two categories: fermions (building blocks of atoms, such as electrons, protons, and neutrons), which cannot exist in the same place at the same time, and bosons, such as photons, which can (see Elementary Particles). Bosons are the quantum particles associated with the force fields that act on the fermions. Just as the electromagnetic field is a combination of electric and magnetic force fields, there is an even more general field called the electroweak field. This field combines electromagnetic forces and the weak nuclear force. The photon is one of four bosons associated with this field. The other three bosons have large masses and decay, or break apart, quickly into lighter components outside the nucleus of the atom.
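Two standard relations, stated here in modern notation rather than drawn from the article itself, make these statements concrete. Einstein's photoelectric equation relates the maximum kinetic energy of an ejected electron to the frequency of the light rather than its intensity, and the Born rule expresses the probability of finding a particle at position \(x\) as the squared amplitude of its wave function \(\psi\):

\[ E_{\text{max}} = h\nu - W, \qquad P(x) = |\psi(x)|^2, \]

where \(h\) is Planck's constant, \(\nu\) the light's frequency, and \(W\) the work function, the minimum energy needed to free an electron from the material.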
Heads of state of various countries:
In a parliamentary system, such as India, the head of state usually has mostly ceremonial powers, with a separate head of government. However, in some parliamentary systems, like South Africa, there is an executive president that is both head of state and head of government. Likewise, in some parliamentary systems the head of state is not the head of government, but still has significant powers, for example Morocco. In contrast, a semi-presidential system, such as France, has both heads of state and government as the de facto leaders of the nation (in practice they divide the leadership of the nation between themselves). Meanwhile, in presidential systems, the head of state is also the head of government.

An independent nation state normally has a head of state, and determines the extent of its head's executive powers of government or formal representational functions.

In terms of protocol, the head of a sovereign, independent state is usually identified as the person who, according to that state's constitution, is the reigning monarch, in the case of a monarchy, or the president, in the case of a republic.

Among the state constitutions (fundamental laws) that establish different political systems, four major types of heads of state can be distinguished:
- The parliamentary system, with two subset models:
  - The standard model, in which the head of state, in theory, possesses key executive powers, but such power is exercised on the binding advice of a head of government (e.g. United Kingdom, India, Germany).
  - The non-executive model, in which the head of state has either none or very limited executive powers, and mainly has a ceremonial and symbolic role (e.g. Sweden, Japan, Israel).
- The semi-presidential system, in which the head of state shares key executive powers with a head of government or cabinet (e.g. Russia, France, Sri Lanka); and
- The presidential system, in which the head of state is also the head of government and has all executive powers (e.g. United States, Indonesia, South Korea).
In a federal constituent or a dependent territory, the same role is fulfilled by the holder of an office corresponding to that of a head of state. For example, in each Canadian province the role is fulfilled by the lieutenant governor, whereas in most British Overseas Territories the powers and duties are performed by the governor. The same applies to Australian states, Indian states, etc. Hong Kong's constitutional document, the Basic Law, for example, specifies the chief executive as the head of the special administrative region, in addition to their role as the head of government. These non-sovereign-state heads, nevertheless, have limited or no role in diplomatic affairs, depending on the status and the norms and practices of the territories concerned.
Map legend, world's parliamentary states (as of 2021): republics with an executive president elected by a parliament; parliamentary constitutional monarchies in which the monarch usually does not personally exercise power; presidential republics, one-party states, and other forms of government.
In parliamentary systems the head of state may be merely the nominal chief executive officer, heading the executive branch of the state and possessing limited executive power. In reality, however, following a process of constitutional evolution, powers are usually only exercised by direction of a cabinet, presided over by a head of government who is answerable to the legislature. This accountability and legitimacy requires that someone be chosen who has a majority support in the legislature (or, at least, not a majority opposition, a subtle but important difference). It also gives the legislature the right to vote down the head of government and their cabinet, forcing it either to resign or seek a parliamentary dissolution. The executive branch is thus said to be responsible (or answerable) to the legislature, with the head of government and cabinet in turn accepting constitutional responsibility for offering constitutional advice to the head of state.

In parliamentary constitutional monarchies, the legitimacy of the unelected head of state typically derives from the tacit approval of the people via the elected representatives. Accordingly, at the time of the Glorious Revolution, the English parliament acted of its own authority to name a new king and queen (the joint monarchs Mary II and William III); likewise, Edward VIII's abdication required the approval of each of the six independent realms of which he was monarch. In monarchies with a written constitution, the position of monarch is a creature of the constitution and could quite properly be abolished through a democratic procedure of constitutional amendment, although there are often significant procedural hurdles imposed on such a procedure (as in the Constitution of Spain).
In republics with a parliamentary system (such as India, Germany, Austria, Italy and Israel), the head of state is usually titled president and the principal functions of such presidents are mainly ceremonial and symbolic, as opposed to the presidents in a presidential or semi-presidential system.
In reality, numerous variants exist to the position of a head of state within a parliamentary system. The older the constitution, the more constitutional leeway tends to exist for a head of state to exercise greater powers over government, as many older parliamentary system constitutions in fact give heads of state powers and functions akin to presidential or semi-presidential systems, in some cases without containing reference to modern democratic principles of accountability to parliament or even to modern governmental offices. Usually, for example, the king had the power to declare war without the prior consent of parliament.
For example, under the Statuto Albertino, the 1848 constitution of the Kingdom of Italy, parliamentary approval of the government appointed by the king was customary but not required by law. Italy thus had a de facto parliamentary system, but not a de jure one.
Examples of heads of state in parliamentary systems using greater powers than usual, either because of ambiguous constitutions or unprecedented national emergencies, include the decision by King Leopold III of the Belgians to surrender on behalf of his state to the invading German army in 1940, against the will of his government. Judging that his responsibility to the nation by virtue of his coronation oath required him to act, he believed that his government's decision to fight rather than surrender was mistaken and would damage Belgium. (Leopold's decision proved highly controversial. After World War II, Belgium voted in a referendum to allow him to resume his monarchical powers and duties, but because of the ongoing controversy he ultimately abdicated.) The Belgian constitutional crisis in 1990, when the head of state refused to sign into law a bill permitting abortion, was resolved by the cabinet assuming the power to promulgate the law while he was treated as "unable to reign" for twenty-four hours.
Under the non-executive model, such heads of state are excluded completely from the executive: they do not possess even theoretical executive powers or any role, even formal, within the government. Hence their states' governments are not referred to by the traditional parliamentary model head of state styles of His/Her Majesty's Government or His/Her Excellency's Government. Within this general category, variants in terms of powers and functions may exist.
The Constitution of Japan was drawn up under the Allied occupation that followed World War II and was intended to replace the previous militaristic and quasi-absolute monarchy system with a form of liberal democratic parliamentary system. The constitution explicitly vests all executive power in the Cabinet, which is chaired by the prime minister (articles 65 and 66) and responsible to the Diet (articles 67 and 69). The emperor is defined in the constitution as "the symbol of the State and of the unity of the people" (article 1), and is generally recognised throughout the world as the Japanese head of state. Although the emperor formally appoints the prime minister to office, article 6 of the constitution requires him to appoint the candidate "as designated by the Diet", without any right to decline appointment. He is a ceremonial figurehead with no independent discretionary powers related to the governance of Japan.
Since the passage in Sweden of the 1974 Instrument of Government, the Swedish monarch no longer has many of the standard parliamentary system head of state functions that had previously belonged to him or her, as was the case in the preceding 1809 Instrument of Government. Today, the speaker of the Riksdag appoints (following a vote in the Riksdag) the prime minister and terminates his or her commission following a vote of no confidence or voluntary resignation. Cabinet members are appointed and dismissed at the sole discretion of the prime minister. Laws and ordinances are promulgated by two Cabinet members in unison signing "On Behalf of the Government" and the government, not the monarch, is the high contracting party with respect to international treaties. The remaining official functions of the sovereign, by constitutional mandate or by unwritten convention, are to open the annual session of the Riksdag, receive foreign ambassadors and sign the letters of credence for Swedish ambassadors, chair the foreign advisory committee, preside at the special Cabinet council when a new prime minister takes office, and to be kept informed by the prime minister on matters of state.
In contrast, the only contact the president of Ireland has with the Irish government is through a formal briefing session given by the taoiseach (head of government) to the president. However, he or she has no access to documentation and all access to ministers goes through the Department of the Taoiseach. The president does, however, hold limited reserve powers, such as referring a bill to the Supreme Court to test its constitutionality, which are exercised at the president's discretion.
The most extreme non-executive republican head of state is the president of Israel, who holds no reserve powers whatsoever. The least ceremonial powers held by the president are to appoint the prime minister, to approve the dissolution of the Knesset made by the prime minister, and to pardon criminals or to commute their sentence.
Some parliamentary republics (like South Africa) have fused the roles of the head of state with the head of government (as in a presidential system), while making the sole executive officer, often called a president, dependent on Parliament's confidence to rule (as in a parliamentary system). While also being the leading symbol of the nation, the president in this system acts mostly as a prime minister, since the incumbent must be a member of the legislature at the time of the election, answer question sessions in Parliament, avoid motions of no confidence, and so on.
Semi-presidential systems combine features of presidential and parliamentary systems, notably (in the president-parliamentary subtype) a requirement that the government be answerable to both the president and the legislature. The constitution of the Fifth French Republic provides for a prime minister who is chosen by the president, but who nevertheless must be able to gain support in the National Assembly. Should a president be of one side of the political spectrum and the opposition be in control of the legislature, the president is usually obliged to select someone from the opposition to become prime minister, a process known as cohabitation. President François Mitterrand, a Socialist, for example, was forced to cohabit with the neo-Gaullist (right wing) Jacques Chirac, who became his prime minister from 1986 to 1988. In the French system, in the event of cohabitation, the president is often allowed to set the policy agenda in security and foreign affairs and the prime minister runs the domestic and economic agenda.
Other countries evolve into something akin to a semi-presidential system or indeed a full presidential system. Weimar Germany, for example, in its constitution provided for a popularly elected president with theoretically dominant executive powers that were intended to be exercised only in emergencies, and a cabinet appointed by him from the Reichstag, which was expected, in normal circumstances, to be answerable to the Reichstag. Initially, the president was merely a symbolic figure with the Reichstag dominant; however, persistent political instability, in which governments often lasted only a few months, led to a change in the power structure of the republic, with the president's emergency powers called increasingly into use to prop up governments challenged by critical or even hostile Reichstag votes. By 1932, power had shifted to such an extent that the German president, Paul von Hindenburg, was able to dismiss a chancellor and select his own person for the job, even though the outgoing chancellor possessed the confidence of the Reichstag while the new chancellor did not. Subsequently, President von Hindenburg used his power to appoint Adolf Hitler as Chancellor without consulting the Reichstag.
Note: The head of state in a "presidential" system may not actually hold the title of "president" - the name of the system refers to any head of state who actually governs and is not directly dependent on the legislature to remain in office.
Some constitutions or fundamental laws provide for a head of state who is not only in theory but in practice chief executive, operating separately from, and independently of, the legislature. This system is known as a "presidential system" and sometimes called the "imperial model", because the executive officials of the government are answerable solely and exclusively to a presiding, acting head of state, and are selected by, and on occasion dismissed by, the head of state without reference to the legislature. It is notable that some presidential systems, while not providing for collective executive accountability to the legislature, may require legislative approval for individuals prior to their assumption of cabinet office and empower the legislature to remove a president from office (for example, in the United States of America). In this case the debate centers on confirming them into office, not removing them from office, and does not involve the power to reject or approve proposed cabinet members en bloc, so accountability does not operate in the same sense understood in a parliamentary system.
Presidential systems are a notable feature of constitutions in the Americas, including those of Argentina and El Salvador; this is generally attributed to the strong influence of the United States in the region, as the United States Constitution served as an inspiration and model for the Latin American wars of independence of the early 19th century. Most presidents in such countries are selected by democratic means (popular direct or indirect election); however, like all other systems, the presidential model also encompasses people who become head of state by other means, notably through military dictatorship or coup d'état, as often seen in Latin American, Middle Eastern and other presidential regimes. Some of the characteristics of a presidential system, such as a strong dominant political figure with an executive answerable to them rather than to the legislature, can also be found among absolute monarchies, parliamentary monarchies and single-party regimes, but in most cases of dictatorship, their stated constitutional models are applied in name only and not in political theory or practice.
In the 1870s in the United States, in the aftermath of the impeachment of President Andrew Johnson and his near-removal from office, it was speculated that the United States, too, would move from a presidential system to a semi-presidential or even parliamentary one, with the speaker of the House of Representatives becoming the real center of government as a quasi-prime minister.
This did not happen and the presidency, having been damaged by three late nineteenth and early twentieth century assassinations (Lincoln, Garfield and McKinley) and one impeachment (Johnson), reasserted its political dominance by the early twentieth century through such figures as Theodore Roosevelt and Woodrow Wilson.
, where, after the presidency of party leader Kim Il-sung
, the office was vacant for years. The late president was granted the posthumous title (akin to some ancient Far Eastern traditions to give posthumous names and titles to royalty) of "Eternal President"
. All substantive power, as party leader, itself not formally created for four years, was inherited by his son Kim Jong-il
. The post of president was formally replaced on 5 September 1998, for ceremonial purposes, by the office of President of the Presidium of the Supreme People's Assembly
, while the party leader's post as chairman of the National Defense Commission
was simultaneously declared "the highest post of the state", not unlike Deng Xiaoping
earlier in the People's Republic of China
Complications with categorisation
While clear categories do exist, it is sometimes difficult to choose which category some individual heads of state belong to. In reality, the category to which each head of state belongs is assessed not by theory but by practice.
Head of state is the highest-ranking constitutional position in a sovereign state. A head of state has some or all of the roles listed below, often depending on the constitutional category (above), and does not necessarily regularly exercise the most power or influence of governance. There is usually a formal public ceremony when a person becomes head of state, or some time after. This may be the swearing in at the inauguration of a president of a republic, or the coronation of a monarch.
In many countries, official portraits of the head of state can be found in government offices, courts of law, or other public buildings. The idea, sometimes regulated by law, is to use these portraits to make the public aware of the symbolic connection to the government, a practice that dates back to medieval times. Sometimes this practice is taken to excess, and the head of state becomes the principal symbol of the nation, resulting in the emergence of a personality cult where the image of the head of state is the only visual representation of the country, surpassing other symbols such as the flag.
, postage and other stamps
, sometimes by no more than a mention or signature; and public places, streets, monuments and institutions such as schools are named for current or previous heads of state. In monarchies (e.g., Belgium) there can even be a practice to attribute the adjective "royal" on demand based on existence for a given number of years. However, such political techniques can also be used by leaders without the formal rank of head of state, even party - and other revolutionary leaders without formal state mandate.
At home, heads of state are expected to render lustre to various occasions by their presence, such as by attending artistic or sports performances or competitions (often in a theatrical honour box, on a platform, on the front row, at the honours table), expositions, national day celebrations, dedication events, military parades and war remembrances, prominent funerals, visiting different parts of the country and people from different walks of life, and at times performing symbolic acts such as cutting a ribbon, christening a ship, or laying the first stone. Some parts of national life receive their regular attention, often on an annual basis, or even in the form of official patronage.
As such invitations may be very numerous, such duties are often in part delegated to such persons as a spouse, a head of government or a cabinet minister, or in other cases (possibly as a message, for instance, to distance themselves without rendering offence) just a military officer or civil servant.
For non-executive heads of state there is often a degree of censorship by the politically responsible government (such as the head of government). This means that the government discreetly approves agenda and speeches, especially where the constitution (or customary law) assumes all political responsibility by granting the crown inviolability (in fact also imposing political emasculation), as in the Kingdom of Belgium from its very beginning; in a monarchy this may even be extended to some degree to other members of the dynasty, especially the heir to the throne.
Below follows a list of examples from different countries of general provisions in law, which either designate an office as head of state or define its general purpose.
- "The King is the Head of State, the symbol of its unity and permanence. He arbitrates and moderates the regular functioning of the institutions, assumes the highest representation of the Spanish State in international relations, especially with the nations of its historical community, and exercises the functions expressly conferred on him by the Constitution and the laws." (Constitution of Spain)
- "The Emperor shall be the symbol of the State and of the unity of the People, deriving his position from the will of the people with whom resides sovereign power." (Constitution of Japan)
- "The President of the Republic is the Head of the State and a symbol of the unity of the country and represents the sovereignty of the country. He shall guarantee the commitment to the Constitution and the preservation of Iraq's independence, sovereignty, unity, and the safety of its territories, in accordance with the provisions of the Constitution." (Constitution of Iraq)
- "(1) The President shall be the Head of State and represent the State vis-à-vis foreign states. (2) The President shall have the responsibility and duty to safeguard the independence, territorial integrity and continuity of the State and the Constitution."
- "The President of the Republic shall be Head of State. He shall represent the State of Lithuania and shall perform everything with which he is charged by the Constitution and laws." (Constitution of Lithuania)

Example 9 (semi-presidential republic):
Chapter 4, Article 80, Section 1-2 of the Constitution of Russia
1. The President of the Russian Federation shall be the Head of State. 2. The President of the Russian Federation shall be the guarantor of the Constitution of the Russian Federation and of human and civil rights and freedoms. In accordance with the procedure established by the Constitution of the Russian Federation, he (she) shall adopt measures to protect the sovereignty of the Russian Federation, its independence and State integrity, and shall ensure the coordinated functioning and interaction of State government bodies.
In the majority of states, whether republics or monarchies, executive authority is vested, at least notionally, in the head of state. In presidential systems the head of state is the actual, de facto chief executive officer. Under parliamentary systems the executive authority is exercised by the head of state, but in practice on the advice of the cabinet of ministers. This produces such terms as "Her Majesty's Government" and "His Excellency's Government." Examples of parliamentary systems in which the head of state is notional chief executive include Australia and the United Kingdom.
- "Subject to the limitations laid down in this Constitution Act the King shall have the supreme authority in all the affairs of the Realm, and he shall exercise such supreme authority through the Ministers."
- "The executive power of the Commonwealth is vested in the Queen and is exercisable by the Governor-General as the Queen's representative, and extends to the execution and maintenance of this Constitution, and of the laws of the Commonwealth."
- "The executive power of the union shall be vested in the President and shall be exercised by him either directly or indirectly through the officers subordinate to him in accordance to the Constitution."
- "The President of the Russian Federation shall, in accordance with the Constitution of the Russian Federation and federal laws, determine the basic objectives of the internal and foreign policy of the State."
The few exceptions where the head of state is not even the nominal chief executive - and where supreme executive authority is according to the constitution explicitly vested in a cabinet - include the Czech Republic
Appointment of senior officials
The head of state usually appoints most or all the key officials in the government, including the head of government and other cabinet ministers, key judicial figures, and all major office holders in the civil service, foreign service and commissioned officers in the military. In many parliamentary systems, the head of government is appointed with the consent (in practice often decisive) of the legislature, and other figures are appointed on the head of government's advice.
In presidential systems, such as that of the United States, nominations to office are made at the president's sole discretion, but the nomination is often subject to confirmation by the legislature; specifically in the US, the Senate has to approve senior executive branch and judicial appointments by a simple majority vote.
The head of state may also dismiss office-holders. There are many variants on how this can be done. For example, members of the Irish Cabinet are dismissed by the president on the advice of the taoiseach; in other instances, the head of state may be able to dismiss an office holder unilaterally; other heads of state, or their representatives, have the theoretical power to dismiss any office-holder, although it is exceptionally rarely used. In France, while the president cannot force the prime minister to tender the resignation of the government, he can, in practice, request it if the prime minister is from his own majority.
In presidential systems, the president often has the power to fire ministers at his sole discretion. In the United States, the unwritten convention calls for the heads of the executive departments to resign on their own initiative when called to do so.
- "The King appoints and dismisses his ministers. The Federal Government offers its resignation to the King if the House of Representatives, by an absolute majority of its members, adopts a motion of no confidence proposing a successor to the prime minister for appointment by the King or proposes a successor to the prime minister for appointment by the King within three days of the rejection of a motion of confidence. The King appoints the proposed successor as prime minister, who takes office when the new Federal Government is sworn in."
- "I - appoint and dismiss the Ministers of State; XIII - ...appoint the commanders of the Navy, Army and Air Force, to promote general officers and to appoint them to the offices held exclusively by them; XIV - appoint, after approval by the Senate, the Justices of the Supreme Federal Court and those of the superior courts, the Governors of the territories, the Attorney-General of the Republic, the President and the Directors of the Central Bank and other civil servants, when established by law; XV - appoint, with due regard for the provisions of article 73, the Justices of the Federal Court of Accounts; XVI - appoint judges in the events established by this Constitution and the Advocate-General of the Union; XVII - appoint members of the Council of the Republic, in accordance with article 89, VII; XXV - fill and abolish federal government positions, as set forth by law"
Although many constitutions, particularly from the 19th century and earlier, make no explicit mention of a head of state in the generic sense used in several present-day international treaties, the officeholders corresponding to this position are recognised as such by other countries.
In a monarchy, the monarch is generally understood to be the head of state.
The Vienna Convention on Diplomatic Relations, which codified longstanding custom, operates under the presumption that the head of a diplomatic mission (i.e. ambassador) of the sending state is accredited to the head of state of the receiving state.
The head of state accredits (i.e. formally validates) his or her country's ambassadors (or rarer equivalent diplomatic mission chiefs, such as high commissioners or papal nuncios) by sending a formal Letter of Credence (and a Letter of Recall at the end of a tenure) to other heads of state and, conversely, receives the letters of their foreign counterparts.
Without that accreditation, the chief of the diplomatic mission cannot take up their role and receive the highest diplomatic status. The role of a head of state in this regard is codified in the Vienna Convention on Diplomatic Relations of 1961, which (as of 2017) 191 sovereign states have ratified.
The head of state is often designated the high contracting party in international treaties on behalf of the state; signs them either personally or has them signed in his or her name by ministers (government members or diplomats); subsequent ratification, when necessary, may rest with the legislature. The treaties constituting the European Union and the European Communities are noteworthy contemporary cases of multilateral treaties cast in this traditional format, as are the accession agreements of new member states.
However, rather than being invariably concluded between two heads of state, bilateral treaties are now commonly cast in an intergovernmental format, e.g., between the Government of X and the Government of Y, rather than between His Majesty the King of X and His Excellency the President of Y.
- "1) The Reigning Prince shall represent the State in all its relations with foreign countries, without prejudice to the requisite participation of the responsible Government. 2) Treaties by which territory of the State would be ceded, State property alienated, sovereign rights or prerogatives of the State affected, a new burden imposed on the Principality or its citizens, or an obligation assumed that would limit the rights of the citizens of Liechtenstein shall require the assent of Parliament to attain legal force." (Constitution of Liechtenstein)
- "The Federal President shall represent the Federation in its international relations. He shall conclude treaties with foreign states on behalf of the Federation. He shall accredit and receive envoys."
- "The President of the Republic shall accredit ambassadors and envoys extraordinary to foreign powers; foreign ambassadors and envoys extraordinary shall be accredited to him."

Example 4 (semi-presidential republic):
Chapter 4, Article 86, Section 4 of the Constitution of Russia
a) shall direct the foreign policy of the Russian Federation; b) shall hold negotiations and sign international treaties of the Russian Federation; c) shall sign instruments of ratification; d) shall receive letters of credence and letters of recall of diplomatic representatives accredited to his (her) office.
In a constitutional monarchy or non-executive presidency, the head of state may de jure hold ultimate authority over the armed forces but will normally, as per either written law or unwritten convention, only exercise that authority on the advice of their responsible ministers, meaning that the de facto ultimate decision making on military manoeuvres is made elsewhere. The head of state will, regardless of actual authority, perform ceremonial duties related to the country's armed forces, and will sometimes appear in military uniform for these purposes, particularly in monarchies where the monarch's consort and other members of a royal family may also appear in military garb. This is generally the only time a head of state of a stable, democratic country will appear dressed in such a manner, as statesmen and public are eager to assert the primacy of (civilian, elected) politics over the armed forces.
, or governments which have arisen from coups d'état
, the position of commander-in-chief is obvious, as all authority in such a government derives from the application of military force; occasionally a power vacuum created by war is filled by a head of state stepping beyond his or her normal constitutional role, as King Albert I of Belgium
did during World War I
. In these and in revolutionary regimes, the head of state, and often executive ministers
whose offices are legally civilian, will frequently appear in military uniform.
"The King is Commander-in-Chief of the land and naval forces of the Realm. These forces may not be increased or reduced without the consent of the Storting. They may not be transferred to the service of foreign powers, nor may the military forces of any foreign power, except auxiliary forces assisting against hostile attack, be brought into the Realm without the consent of the Storting. The territorial army and the other troops which cannot be classed as troops of the line must never, without the consent of the Storting, be employed outside the borders of the Realm."

Example 3 (parliamentary republic):
Chapter II, Article 87, 4th section of the Constitution of Italy
The President is the commander-in-chief of the armed forces, shall preside over the Supreme Council of Defense established by law, and shall make declarations of war as have been agreed by the Parliament of Italy.

Example 5 (semi-presidential republic):
According to Chapter 4, Article 87, Section 1 of the Constitution of Russia
The Emir is the Commander-in-Chief of the armed forces. He shall supervise the same with the assistance of Defence Council under his direct authority. The said Council shall be constituted by an Emiri Resolution, which will also determine the functions thereof.
Some countries with a parliamentary system designate officials other than the head of state with command-in-chief powers.
It is usual that the head of state, particularly in parliamentary systems as part of the symbolic role, is the one who opens the annual sessions of the legislature, e.g. the annual State Opening of Parliament with the Speech from the Throne in Britain. Even in presidential systems the head of state often formally reports to the legislature on the present national status, e.g. the State of the Union address in the United States of America, or the State of the Nation Address in South Africa.
Most countries require that all bills passed by the house or houses of the legislature be signed into law by the head of state. In some states, such as the United Kingdom, Belgium and Ireland, the head of state is, in fact, formally considered a tier of the legislature. However, in most parliamentary systems, the head of state cannot refuse to sign a bill, and, in granting a bill their assent, indicates that it was passed in accordance with the correct procedures. The signing of a bill into law is formally known as promulgation. Some monarchical states call this procedure royal assent.
Example 1 (non-executive parliamentary monarchy):
Chapter 1, Article 4 of the Swedish Riksdag Act
"The formal opening of a Riksdag session takes place at a special meeting of the Chamber held no later than the third day of the session. At this meeting, the Head of State declares the session open at the invitation of the Speaker. If the Head of State is unable to attend, the Speaker declares the session open."

"a) shall announce elections to the State Duma in accordance with the Constitution of the Russian Federation and federal law; c) shall announce referendums in accordance with the procedure established by federal constitutional law; d) shall submit draft laws to the State Duma; e) shall sign and promulgate federal laws; f) shall address the Federal Assembly with annual messages on the situation in the country and on the basic objectives of the internal and foreign policy of the State." (Constitution of Russia)

"III – start the legislative procedure, in the manner and in the cases set forth in this Constitution; IV - sanction, promulgate and order the publication of laws, as well as to issue decrees and regulations for the true enforcement thereof; V - veto bills, wholly or in part; XI - upon the opening of the legislative session, send a government message and plan to the National Congress, describing the state of the nation and requesting the actions he deems necessary; XXIII - submit to the National Congress the pluriannual plan, the bill of budgetary directives and the budget proposals set forth in this Constitution; XXIV - render, each year, accounts to the National Congress concerning the previous fiscal year, within sixty days of the opening of the legislative session"
1. Any draft law passed by the Council shall be referred to the Emir for ratification. 2. If the Emir declines to approve the draft law, he shall return it along with the reasons for such declination to the Council within a period of three months from the date of referral. 3. In the event that a draft law is returned to the Council within the period specified in the preceding paragraph and the Council passes the same once more with a two-thirds majority of all its Members, the Emir shall ratify and promulgate it. The Emir may in compelling circumstances order the suspension of this law for the period that he deems necessary to serve the higher interests of the country. If, however, the draft law is not passed by a two-thirds majority, it shall not be reconsidered within the same term of session.
In some parliamentary systems, the head of state retains certain powers in relation to bills, to be exercised at his or her discretion. They may have authority to veto a bill until the houses of the legislature have reconsidered it and approved it a second time; reserve a bill to be signed later, or suspend it indefinitely (generally in states with royal prerogative; this power is rarely used); refer a bill to the courts to test its constitutionality; or refer a bill to the people in a referendum. If he or she is also chief executive, he or she can thus politically control the necessary executive measures without which a proclaimed law can remain a dead letter, sometimes for years or even forever.
Summoning and dissolving the legislature
A head of state is often empowered to summon and dissolve the country's legislature. In most parliamentary systems, this is done on the advice of the head of government. In some parliamentary systems, and in some presidential systems, however, the head of state may do so on their own initiative. Some states have fixed-term legislatures, with no option of bringing forward elections (e.g., Article II, Section 3, of the U.S. Constitution). In other systems there are usually fixed terms, but the head of state retains authority to dissolve the legislature in certain circumstances. Where a head of government has lost support in the legislature, some heads of state may refuse a dissolution, where one is requested, thereby forcing the head of government's resignation.
- "The President may in absolute discretion refuse to dissolve Dáil Éireann on the advice of a Taoiseach who has ceased to retain the support of a majority in Dáil Éireann."
- "b) shall dissolve the State Duma in the cases and in accordance with the procedure provided for by the Constitution of the Russian Federation;"
Granting titles and honours
"The King may bestow orders upon whomever he pleases as a reward for distinguished services, and such orders must be publicly announced, but no rank or title other than that attached to any office. The order exempts no one from the common duties and burdens of citizens, nor does it carry with it any preferential admission to senior official posts in the State. Senior officials honourably discharged from office retain the title and rank of their office. This does not apply, however, to Members of the Council of State or the State Secretaries. No personal, or mixed, hereditary privileges may henceforth be granted to anyone."

Example 3 (parliamentary republic):
Title II, Article 87, 8th section of the Constitution of Italy
- "The King or Queen who is Head of State cannot be prosecuted for his or her actions. Nor can a Regent be prosecuted for his or her actions as Head of State."
- "The King's person is sacred; he cannot be censured or accused. The responsibility rests with his Council."
- "(1) President of the Republic may not be detained, subjected to criminal prosecution or prosecuted for offence or other administrative delict. (2) President of the Republic may be prosecuted for high treason at the Constitutional Court based on the Senate's suit. The punishment may be the loss of his presidential office and of his eligibility to regain it. (3) Criminal prosecution for criminal offences committed by the President of the Republic while executing his office shall be ruled out forever."
- "1. The President of the Republic answers before the Supreme Court of Justice for crimes committed in the exercise of his functions. 2. Proceedings may only be initiated by the Assembly of the Republic, upon a motion subscribed by one fifth and a decision passed by a two-thirds majority of all the Members of the Assembly of the Republic in full exercise of their office. 3. Conviction implies removal from office and disqualification from re-election. 4. For crimes that are not committed in the exercise of his functions, the President of the Republic answers before the common courts, once his term of office has ended."
- "The Emir is the head of State. His person shall be inviolable and he must be respected by all."
Where the institutions of the Republic, the independence of the Nation, the integrity of its territory or the fulfilment of its international commitments are under serious and immediate threat, and where the proper functioning of the constitutional public authorities is interrupted, the President of the Republic shall take measures required by these circumstances, after formally consulting the Prime Minister, the Presidents of the Houses of Parliament and the Constitutional Council. He shall address the Nation and inform it of such measures. The measures shall be designed to provide the constitutional public authorities, as swiftly as possible, with the means to carry out their duties. The Constitutional Council shall be consulted with regard to such measures. Parliament shall sit as of right. The National Assembly shall not be dissolved during the exercise of such emergency powers. After thirty days of the exercise of such emergency powers, the matter may be referred to the Constitutional Council by the President of the National Assembly, the President of the Senate, sixty Members of the National Assembly or sixty Senators, so as to decide if the conditions laid down in paragraph one still apply. The Council shall make its decision publicly as soon as possible. It shall, as of right, carry out such an examination and shall make its decision in the same manner after sixty days of the exercise of emergency powers or at any moment thereafter.
The Emir may, by a decree, declare martial law in the country in the event of exceptional cases specified by the law; and in such cases, he may take all urgent necessary measures to counter any threat that undermines the safety of the State, the integrity of its territories or the security of its people and interests or obstructs the organs of the State from performing their duties. However, the decree must specify the nature of such exceptional cases for which the martial law has been declared and clarify the measures taken to address this situation. Al-Shoura Council shall be notified of this decree within the fifteen days following its issue; and in the event that the Council is not in session for any reason whatsoever, the Council shall be notified of the decree at its first convening. Martial law shall be declared for a limited period and the same shall not be extended unless approved by Al-Shoura Council.
The Emir may, in the event of exceptional cases that require measures of utmost urgency which necessitate the issue of special laws and in case that Al-Shoura Council is not in session, issue pertinent decrees that have the power of law. Such decree-laws shall be submitted to Al-Shoura Council at its first meeting; and the Council may within a maximum period of forty days from the date of submission and with a two-thirds majority of its Members reject any of these decree-laws or request amendment thereof to be effected within a specified period of time; such decree-laws shall cease to have the power of law from the date of their rejection by the Council or where the period for effecting the amendments have expired.
Right of pardon
- "The King can grant pardons and amnesties. He may only pardon Ministers convicted by the Court of Impeachment with the consent of Parliament."
- "He [The President] shall exercise the power to pardon individual offenders on behalf of the Federation."
- "...and he [The President] shall have Power to grant Reprieves and Pardons for Offences against the United States, except in Cases of Impeachment."
- "(a) grant a pardon, either free or subject to lawful conditions, to a person convicted of an offence; (b) grant to a person a respite, either indefinite or for a specified period, of the execution of a punishment imposed on that person for an offence; (c) substitute a less severe form of punishment for any punishment imposed on a person for an offence; or (d) remit the whole or a part of a punishment imposed on a person for an offence or of a penalty or forfeiture on account of an offence."
While president and various monarchical titles are most commonly used for heads of state, in some nationalistic regimes the leader adopts, formally or de facto, a unique style simply meaning leader in the national language, e.g., Germany's single national socialist party chief and combined head of state and government, Adolf Hitler, as the Führer between 1934 and 1945.
In 1959, when the former British crown colony Singapore gained self-government, it adopted the Malay style Yang di-Pertuan Negara (literally "head of state" in Malay) for its governor (the actual head of state remained the British monarch). The second and last incumbent of the office, Yusof bin Ishak, kept the style at the unilateral declaration of independence of 31 August 1963 and after the 16 September 1963 accession to Malaysia as a state (so now as a constituent part of the federation, a non-sovereign level). After its expulsion from Malaysia on 9 August 1965, Singapore became a sovereign Commonwealth republic and installed Yusof bin Ishak as its first president.
In 1959, after the resignation of Vice President Mohammad Hatta, President Sukarno abolished the position and title of vice-president, assuming the positions of Prime Minister and Head of Cabinet. He also proclaimed himself president for life: Presiden Seumur Hidup Panglima Tertinggi ("panglima" meaning "commander or martial figurehead", "tertinggi" meaning "highest"; roughly translated to English as "Supreme Commander of the Revolution"). He was praised as "Paduka Yang Mulia", a Malay honorific originally given to kings; Sukarno awarded himself titles in that fashion due to his noble ancestry.
In some states the office of head of state is not expressed in a specific title reflecting that role, but is constitutionally awarded to a post of another formal nature. Thus in March 1979 Colonel Muammar Gaddafi, who kept absolute power (and until his overthrow in 2011 was referred to as "Guide of the Revolution"), after ten years as combined head of state and head of government of the Libyan Jamahiriya ("state of the masses"), styled Chairman of the Revolutionary Command Council, formally transferred both qualities to the General Secretary of the General People's Congress (comparable to a speaker) and to a prime minister respectively; in political reality both were his creatures.
Sometimes a head of state assumes office as a state becomes a legal and political reality, before a formal title for the highest office is determined; thus in the republic of Cameroon (a former French colony, independent since 1 January 1960), the first president, Ahmadou Babatoura Ahidjo, was at first not styled président but 'merely' known as chef d'état (French for 'head of state') until 5 May 1960. In Uganda, Idi Amin, the military leader after the coup of 25 January 1971, was formally styled military head of state until 21 February 1971, and only from then on regular (but unconstitutional, not elected) president.
Historical European perspectives
- The polis in Greek Antiquity and the equivalent city states in the feudal era and later, (many in Italy, the Holy Roman Empire, the Moorish taifa in Iberia, essentially tribal-type but urbanised regions throughout the world in the Maya civilisation, etc.) offer a wide spectrum of styles, either monarchic (mostly identical to homonyms in larger states) or republican, see Chief magistrate.
- Doges were elected by their Italian aristocratic republics from a patrician nobility, but "reigned" as sovereign dukes.
- The paradoxical term crowned republic refers to various state arrangements that combine "republican" and "monarchic" characteristics.
- The Netherlands historically had officials called stadholders and stadholders-general, titles meaning "lieutenant" or "governor", originally for the Habsburg monarchs.
In medieval Europe, it was universally accepted that the Pope ranked first among all rulers and was followed by the Holy Roman Emperor. The Pope also had the sole right to determine the precedence of all others.
This principle was first challenged by a Protestant ruler, Gustavus Adolphus of Sweden, and was later maintained by his country at the Congress of Westphalia. Great Britain would later claim a break of the old principle for the Quadruple Alliance in 1718.
However, it was not until the 1815 Congress of Vienna that it was decided (owing to the abolition of the Holy Roman Empire in 1806 and the weak position of France and other Catholic states, which could not assert themselves) that all sovereign states are treated as equals, whether monarchies or republics, and this remains so to this day.
On occasions when multiple heads of state or their representatives meet, precedence is usually determined by the host in alphabetical order (in whatever language the host determines, although French has for much of the 19th and 20th centuries been the lingua franca of diplomacy) or by date of accession.
Contemporary international law on precedence, built upon the universally admitted principles since 1815, derives from the Vienna Convention on Diplomatic Relations (in particular, articles 13, 16.1 and Appendix iii).
Interim and exceptional cases
Whenever a head of state is not available for any reason, constitutional provisions may allow the role to fall temporarily to an assigned person or collective body. In a republic, this is, depending on provisions outlined by the constitution or improvised, a vice-president, the chief of government, the legislature or its presiding officer. In a monarchy, this is usually a regent or collegial regency (council). For example, in the United States the vice-president acts when the president is incapacitated, and in the United Kingdom the queen's powers may be delegated to counselors of state when she is abroad or unavailable. Neither of the two co-princes of Andorra is resident in Andorra; each is represented in Andorra by a delegate, though these persons hold no formal title.
There are also several methods of head of state succession in the event of the removal, disability or death of an incumbent head of state.
In exceptional situations, such as war, occupation, revolution or a coup d'état
, constitutional institutions, including the symbolically crucial head of state, may be reduced to a figurehead or be suspended in favour of an emergency office (such as the original Roman dictator
) or eliminated by a new "provisionary" regime, such as a collective of the junta
type, or removed by an occupying force, such as a military governor
(an early example being the Spartan Harmost).
Shared head of multiple states
The Commonwealth realms
share a monarch, currently Elizabeth II
. In the realms other than the United Kingdom, a governor-general (governor general
in Canada) is appointed by the sovereign, usually on the advice of the relevant prime minister (although sometimes it is based on the result of a vote in the relevant parliament, which is the case for Papua New Guinea
and the Solomon Islands
), as a representative and to exercise almost all the Royal Prerogative
according to established constitutional authority. In Australia the present queen is generally assumed to be head of state, since the governor-general and the state governors are defined as her "representatives".
However, since the governor-general performs almost all national regal functions, the governor-general has occasionally been referred to as head of state
in political and media discussion. To a lesser extent, uncertainty has been expressed in Canada
as to which officeholder—the monarch, the governor general, or both—can be considered the head of state. New Zealand, Papua New Guinea and Tuvalu explicitly name the monarch as their head of state (though Tuvalu's constitution states that "references in any law to the Head of State shall be read as including a reference to the governor-general"
). Governors-general are frequently treated as heads of state on state and official visits; at the United Nations
, they are accorded the status of head of state in addition to the sovereign.
An example of a governor-general departing from constitutional convention
by acting unilaterally (that is, without direction from ministers, parliament, or the monarch) occurred in 1926, when Canada's governor general refused the head of government's formal advice
requesting a dissolution of parliament and a general election. In a letter informing the monarch after the event, the Governor General said: "I have to await the verdict of history to prove my having adopted a wrong course, and this I do with an easy conscience that, right or wrong, I have acted in the interests of Canada and implicated no one else in my decision."
Another example occurred when, in the 1975 Australian constitutional crisis
, the governor-general unexpectedly dismissed the prime minister in order to break a stalemate between the House of Representatives and Senate over money bills. The governor-general issued a public statement saying he felt it was the only solution consistent with the constitution, his oath of office, and his responsibilities, authority, and duty as governor-general.
A letter from the queen's private secretary
at the time, Martin Charteris
, confirmed that the only person competent to commission an Australian prime minister was the governor-general and it would not be proper for the monarch to personally intervene in matters that the Constitution Act so clearly places within the governor-general's jurisdiction.
Religious heads of state
In some states with a strong religious tradition, certain dynasties adopted a title expressing their position as "servant" of a patron deity of the state, but in the sense of a viceroy under an absentee god-king, ruling "in the name of" the patron god(dess), such as Padmanabha Dasa (servant of Vishnu) in the case of the Maharaja of Travancore.
From the time of the 5th Dalai Lama
until the political retirement of the 14th Dalai Lama
in 2011, Dalai Lamas were both political and spiritual leaders ("god-kings") of Tibet.
Multiple or collective heads of state
In the Roman Republic
there were two heads of state, styled consul
, both of whom alternated months of authority during their year in office; similarly, there was an even number of supreme magistrates in the Italic republics of antiquity. In the Athenian Republic
there were nine supreme magistrates, styled archons
. In Carthage
there were two supreme magistrates, styled kings or suffetes
(judges). In ancient Sparta
there were two hereditary kings, belonging to two dynasties. In the Soviet Union
the Central Executive Committee
of the Congress of Soviets
(between 1922 and 1938) and later the Presidium
of the Supreme Soviet
(between 1938 and 1989) served as the collective head of state.
After World War II the Soviet model was adopted by almost all countries that belonged to its sphere of influence. Czechoslovakia remained the only country among them that retained an office of president as a form of a single head of state throughout this period, followed by Romania through the creation of that country's presidency by dictator Nicolae Ceaușescu.
A modern example of a collective head of state is the Sovereignty Council of Sudan
, the interim ruling council of Sudan
. The Sovereignty Council comprises 11 ministers, who together have exercised all governmental functions for Sudan since the fall of President Omar Al-Bashir
Decisions are made either by consensus or by a supermajority vote (8 members).
Such arrangements are not to be confused with supranational entities, which are not states and are not defined by a common monarchy but may (or may not) have a symbolic, essentially protocolary, titled highest office, e.g. Head of the Commonwealth (held by the British crown, but not legally reserved for it) or 'Head of the Arab Union' (14 February – 14 July 1958, held by the Hashemite King of Iraq during its short-lived federation with Jordan, its Hashemite sister-realm).
The position of head of state can be established in different ways, and with different sources of legitimacy.
By fiction or fiat
Power can come from force, but formal legitimacy
is often established, even if only by fictitious claims of continuity (e.g., a forged claim of descent from a previous dynasty
). There have been cases of sovereignty granted by deliberate act, even when accompanied by orders of succession
(as may be the case in a dynastic split). Such grants of sovereignty are usually forced, as is common with self-determination
granted after nationalist
revolts. This occurred with the last Attalid
king of Hellenistic Pergamon
, who by testament left his realm to Rome to avoid a disastrous conquest.
By divine appointment
Under a theocracy, perceived divine status translated into earthly authority under divine law
. This can take the form of supreme divine authority above the state's, granting a tool for political influence to a priesthood
. In this way, the Amun
priesthood reversed the reforms of Pharaoh Akhenaten
after his death. The division of theocratic power can be disputed, as happened between the Pope and Holy Roman Emperor
in the investiture
conflict, when the temporal power sought to control key clergy nominations in order to guarantee popular support, and thereby its own legitimacy, by incorporating the formal ceremony of unction.
By social contract
By hereditary succession
The position of a monarch is usually hereditary
, but in constitutional monarchies
, there are usually restrictions on the incumbent's exercise of powers and prohibitions on the possibility of choosing a successor by other means than by birth. In a hereditary monarchy, the position of monarch is inherited according to a statutory or customary order of succession
, usually within one royal family
tracing its origin through a historical dynasty
or bloodline. This usually means that the heir to the throne is known well in advance of becoming monarch, to ensure a smooth succession. However, cases of uncertain succession in European history have often led to wars of succession. Primogeniture, in which the eldest child of the monarch is first in line to become monarch, is the most common system in hereditary monarchy. The order of succession is usually affected by rules on gender. Historically "agnatic primogeniture" or "patrilineal primogeniture" was favoured, that is, inheritance according to seniority of birth among the sons of a monarch or head of family
, with sons and their male issue inheriting before brothers and their issue, and male-line
males inheriting before females of the male line.
This is the same as semi-Salic primogeniture. Complete exclusion of females from dynastic
succession is commonly referred to as application of the Salic law
(see Terra salica).
Before primogeniture was enshrined in European law and tradition, kings would often secure the succession by having their successor (usually their eldest son) crowned during their own lifetime, so for a time there would be two kings in coregency
– a senior king and a junior king. Examples include Henry the Young King
of England and the early Direct Capetians.
Sometimes, however, primogeniture can operate through the female line. In some systems a female may rule as monarch only when the male line dating back to a common ancestor is exhausted. In 1980, Sweden
, by rewriting its 1810 Act of Succession
, became the first European monarchy to declare equal (full cognatic) primogeniture, meaning that the eldest child of the monarch, whether female or male, ascends to the throne.
Other European monarchies (such as the Netherlands
in 1983, Norway
in 1990 and Belgium
in 1991) have since followed suit. Similar reforms were proposed in 2011
for the United Kingdom
and the other Commonwealth realms
, which came into effect in 2015 after having been approved by all of the affected nations. Sometimes religion
is affected; under the Act of Settlement 1701
all Roman Catholics
and all persons who have married Roman Catholics are ineligible to be the British monarch
and are skipped in the order of succession.
In some monarchies there may be liberty for the incumbent, or some body convening after his or her demise, to choose from eligible members of the ruling house
, often limited to legitimate
descendants of the dynasty's founder. Rules of succession may be further limited by state religion
, residency, equal marriage
or even permission from the legislature.
Other hereditary systems of succession included tanistry
, which is semi-elective and gives weight to merit, and agnatic seniority
. In some monarchies, such as Saudi Arabia
, succession to the throne usually first passes to the monarch's next eldest brother, and only after that to the monarch's children (agnatic seniority).
By election
Election is usually the constitutional way to choose the head of state of a republic, and of some monarchies, either directly through popular election, indirectly by members of the legislature or of a special college of electors
(such as the Electoral College
in the United States
), or as an exclusive prerogative. Exclusive prerogative allows the heads of states of constituent monarchies of a federation to choose the head of state for the federation among themselves, as in the United Arab Emirates
. The Pope, head of state of Vatican City, is chosen by previously appointed cardinals
under 80 years of age from among themselves in a papal conclave.
By force or revolution
By foreign imposition
Apart from violent overthrow, a head of state's position can be lost in several ways, including death, expiration of the constitutional term of office, abdication
, or resignation. In some cases, an abdication cannot occur unilaterally, but comes into effect only when approved by an act of parliament, as in the case of British King Edward VIII
. The post can also be abolished by constitutional change; in such cases, an incumbent may be allowed to finish his or her term. Of course, a head of state position will cease to exist if the state itself does.
Heads of state generally enjoy the widest inviolability, although some states allow impeachment
, or a similar constitutional procedure by which the highest legislative or judicial authorities are empowered to revoke the head of state's mandate on exceptional grounds. This may be a common crime, a political sin, or an act by which he or she violates such provisions as an established religion mandatory for the monarch. By similar procedure, an original mandate may be declared invalid.
Former heads of state
The National Monument to Emperor Wilhelm I
in Berlin, Germany, dedicated 1897, nearly 10 years after his death. The monument was destroyed by the communist government in 1950.
Monuments of former heads of state can be designed to represent the history or aspirations of a state or its people, such as the equestrian bronze sculpture of Kaiser Wilhelm I
, first Emperor of a unified Germany
erected in Berlin at the end of the nineteenth century; or the Victoria Memorial
erected in front of Buckingham Palace
London, commemorating Queen Victoria and her reign (1837–1901), and unveiled in 1911 by her grandson, King George V
; or the monument
, placed in front of the Victoria Memorial Hall, Kolkata (Calcutta) (1921), commemorating Queen Victoria's reign as Empress of India
Another twentieth-century example is the Mount Rushmore
National Memorial, a group sculpture constructed (1927–1941) on a conspicuous skyline in the Black Hills
of South Dakota
(40th state of the Union, 1889
), in the midwestern United States
, representing the territorial expansion of the United States in the first 130 years from its founding; it is promoted as the "Shrine of Democracy".
Personal influence or privileges
Former presidents of the United States, while holding no political powers per se
, sometimes continue to exert influence in national and world affairs.
A monarch may retain his style and certain prerogatives after abdication, as did King Leopold III of Belgium
, who left the throne to his son after winning a referendum which allowed him to retain a full royal household but deprived him of a constitutional or representative role. Napoleon
transformed the Italian principality of Elba
, where he was imprisoned, into a miniature version of his First Empire, with most trappings of a sovereign monarchy, until his Cent Jours
escape and reseizure of power in France convinced his opponents, reconvening the Vienna Congress
in 1815, to revoke his gratuitous privileges and send him to die in exile
on barren Saint Helena.
By tradition, deposed monarchs who have not freely abdicated continue to use their monarchical titles as a courtesy
for the rest of their lives. Hence, even after Constantine II
ceased to be King of the Hellenes
, it is still common to refer to the deposed king and his family as if Constantine II were still on the throne, as many European royal courts and households do in guest lists at royal weddings, as in Sweden in 2010
, Britain in 2011
and Luxembourg in 2012
The Republic of Greece
opposes the right of its deposed monarch and former royal family members
to be referred to by their former titles or to bear a surname indicating royal status, and has enacted legislation which hinders acquisition of Greek citizenship
unless those terms are met. The former king brought this issue, along with property ownership issues, before the European Court of Human Rights
for alleged violations of the European Convention on Human Rights
, but lost with respect to the name issue.
However, some other states have no problem with deposed monarchs being referred to by their former title, and even allow them to travel internationally on the state's diplomatic passport.
The Italian constitution provides that a former president of the Republic takes the title President Emeritus of the Italian Republic; he or she is also a senator for life and enjoys certain privileges, such as immunity, flight status and official residences.
- ^ It is listed as such in the current Constitution; it is thus equivalent to organs such as the State Council, rather than to offices such as that of the Premier.
- ^ On the occasion of a royal marriage in 1760, the premier of Portugal, the Marquis of Pombal, tried to maintain that the host, the King of Portugal, should as a crowned head have the sovereign right to determine the precedence of how ambassadors (apart from the papal nuncio and the imperial ambassador) would rank, based on the date of their credentials. The pragmatic suggestions of Pombal were not successful, and as the pretensions among the great powers were so deep-rooted, it would take the Napoleonic Wars for the great powers to have a fresh look at the issue.
- ^ a b Foakes, pp. 110–11 "[The head of state] being an embodiment of the State itself or representative of its international persona."
- ^ Foakes, p. 62
- ^ Kubicek, Paul (2015). European Politics. Routledge. pp. 154–56, 163. ISBN 978-1-317-34853-5.
- ^ Nicolaidis and Weatherill (ed.) (2003). "Whose Europe? National Models and the Constitution of the European Union" (PDF). Archived from the original (PDF) on 17 June 2015. Retrieved 23 December 2014.
- ^ Gouvea, C. P. (2013). "The Managerial Constitution: The Convergence of Constitutional and Corporate Governance Models". SSRN 2288315.
- ^ Belavusau, U. (2013). Freedom of speech: importing European and US constitutional models in transitional democracies. Routledge. ISBN 9781135071981. Archived from the original on 23 December 2014. Retrieved 23 December 2014.
- ^ Klug, Heinz (March 2003). "Postcolonial Collages: Distributions of Power and Constitutional Models, With Special Reference to South Africa". International Sociology. 18 (1): 114–131. doi:10.1177/0268580903018001007. S2CID 144612269.
- ^ Watts.
- ^ "Belgian King, Unable to Sign Abortion Law, Takes Day Off". The New York Times. 5 April 1990. Archived from the original on 21 March 2017. Retrieved 8 February 2017.
- ^ Art. 93. "Should the King find himself unable to reign, the ministers, having observed this inability, immediately summon the Chambers. Regency and guardianship are to be provided by the united Chambers." The Constitution of Belgium, Coordinated text of 14 February 1994 (last updated 8 May 2007)"Archived copy". Archived from the original on 1 June 2013. Retrieved 10 December 2014.
- ^ a b c d e f HEADS OF STATE, HEADS OF GOVERNMENT, MINISTERS FOR FOREIGN AFFAIRS Archived 25 August 2016 at the Wayback Machine, Protocol and Liaison Service, United Nations (8 April 2016). Retrieved on 15 April 2016.
- ^ a b c The Constitution of Japan Archived 14 December 2013 at the Wayback Machine, Office of the Prime Minister. Retrieved on 2 November 2012.
- ^ Japan in The World Factbook, Central Intelligence Agency. Retrieved on 11 November 2012.
- ^ a b c d The Instrument of Government Archived 20 May 2014 at the Wayback Machine, Riksdag of Sweden. Retrieved on 2 November 2012.
- ^ Duties of the Monarch Archived 16 March 2015 at the Wayback Machine, Royal Court of Sweden. Retrieved on 1 November 2012.
- ^ a b c d Constitution of Ireland Archived 20 August 2015 at the Wayback Machine, Office of the Attorney General (December 2013). Retrieved 3 August 2014.
- ^ Lifetime portrait (1796), known as the "Lansdowne portrait", includes spines of two books titled "American Revolution" and "Constitution and Laws of the United States".
- ^ Chris Buckley and Adam Wu (10 March 2018). "Ending Term Limits for China's Xi Is a Big Deal. Here's Why. - Is the presidency powerful in China?". The New York Times. Archived from the original on 12 March 2018. Retrieved 28 September 2019. In China, the political job that matters most is the General Secretary of the Communist Party. The party controls the military and domestic security forces, and sets the policies that the government carries out. China’s presidency lacks the authority of the American and French presidencies.
- ^ Krishna Kanta Handique State Open University Archived 2 May 2014 at the Wayback Machine, EXECUTIVE: THE PRESIDENT OF THE CHINESE REPUBLIC.
- ^ "A simple guide to the Chinese government". South China Morning Post. Archived from the original on 13 May 2018. Retrieved 28 September 2019. Xi Jinping is the most powerful figure in the Chinese political system. He is the President of China, but his real influence comes from his position as the General Secretary of the Chinese Communist Party.
- ^ "China sets stage for Xi to stay in office indefinitely". Reuters. 25 February 2018. Archived from the original on 26 February 2018. Retrieved 28 September 2019. However, the role of party chief is more senior than that of president. At some point, Xi could be given a party position that also enables him to stay on as long as he likes.
- ^ a b c Constitution of the Principality of Liechtenstein (LR 101) Archived 8 August 2014 at the Wayback Machine (2009). Retrieved on 3 August 2014.
- ^ Constitution of the Republic of South Africa, 1996 Archived 25 April 2014 at the Wayback Machine, Department of Justice and Constitutional Development (2009). Retrieved on 3 August 2014.
- ^ Constitution of Botswana Archived 23 January 2013 at the Wayback Machine, Embassy of the Republic of Botswana in Washington DC. Retrieved on 11 November 2012.
- ^ a b THE CONSTITUTION OF NAURU Archived 1 February 2014 at the Wayback Machine, Parliament of Nauru. Retrieved on 11 November 2012.
- ^ "The Crown in Canada" (PDF). Department of Canadian Heritage. Archived from the original on 8 August 2014. Retrieved 31 August 2014.
- ^ The Queen's role in Canada Archived 20 February 2009 at the Wayback Machine, Royal Household. Retrieved on 2 November 2012.
- ^ Olympic Charter: in force as of 2 August 2016 Archived 19 September 2016 at the Wayback Machine, International Olympic Committee (August 2016). Retrieved on 13 September 2016.
- ^ SPANISH CONSTITUTION Archived 21 April 2012 at the Wayback Machine, Senate of Spain. Retrieved on 2 November 2012.
- ^ a b Constitution Act 1986 Archived 17 October 2013 at the Wayback Machine, New Zealand Parliamentary Counsel Office. Retrieved on 28 August 2013.
- ^ a b Constitution of the Italian Republic Archived 20 May 2012 at the Wayback Machine, Senate of the Republic. Retrieved on 2 November 2012.
- ^ Constitution of Iraq Archived 28 November 2016 at the Wayback Machine. Retrieved 3 August 2014.
- ^ a b CONSTITUTION OF THE PORTUGUESE REPUBLIC: SEVENTH REVISION (2005) Archived 23 June 2014 at the Wayback Machine, Portuguese Constitutional Court. Retrieved on 2 November 2012.
- ^ a b THE CONSTITUTION OF THE REPUBLIC OF KOREA Archived 10 March 2012 at the Wayback Machine, Constitutional Court of Korea. Retrieved on 2 November 2012.
- ^ The Constitution of the Republic of Lithuania Archived 18 May 2019 at the Wayback Machine, Seimas. Retrieved on 2 November 2012.
- ^ a b c d e f Constitution of the Russian Federation Archived 4 May 2013 at the Wayback Machine, Government of the Russian Federation. Retrieved on 2 November 2012.
- ^ CONSTITUTION OF THE ARGENTINE NATION Archived 4 June 2011 at the Wayback Machine, Argentine Senate. Retrieved on 16 November 2012.
- ^ a b My Constitutional Act with explanations, 9th edition Archived 18 June 2013 at the Wayback Machine, The Communications Section, Danish Parliament (August 2012). Retrieved on 11 November 2012.
- ^ The Constitution as in force on 1 June 2003 together with proclamation declaring the establishment of the Commonwealth, letters patent relating to the Office of Governor-General, Statute of Westminster Adoption Act 1942, Australia Act 1986. Archived 2 February 2012 at WebCite, ComLaw, Government of Australia (2003) ISBN 0 642 78285 7. Retrieved on 11 November 2012.
- ^ The Constitution Archived 14 November 2017 at the Wayback Machine, Publications Department, Hellenic Parliament (2008) ISBN 960 560 073 0. Retrieved on 11 November 2012.
- ^ Constitution of India, Part V Archived 24 August 2015 at the Wayback Machine, Ministry of Law and Justice. Retrieved on 11 November 2012.
- ^ a b c Constitution of the Federative Republic of Brazil: 3rd Edition, Chamber of Deputies (2010) ISBN 978-85-736-5737-1. Retrieved on 13 November 2012.
- ^ a b c d e f Constitution of the United States Archived 23 August 2011 at WebCite, National Archives and Records Administration. Retrieved on 11 November 2012.
- ^ a b c d e f Constitution of October 4, 1958 Archived 1 March 2010 at the Wayback Machine, The French National Assembly. Retrieved on 11 November 2012.
- ^ a b THE BELGIAN CONSTITUTION Archived 6 July 2011 at the Wayback Machine, Legal Department, Belgian House of Representatives (August 2012). Retrieved on 11 November 2012.
- ^ a b c d Vienna Convention on Diplomatic Relations 1961 Archived 17 August 2018 at the Wayback Machine, International Law Commission, United Nations. Retrieved on 15 October 2012.
- ^ a b Robertson: p. 221.
- ^ Roberts: pp. 35-44.
- ^ Roberts: pp. 71-79.
- ^ Roberts: pp. 61-68.
- ^ "Vienna Convention on Diplomatic Relations". United Nations Treaty Collection. United Nations. Archived from the original on 15 March 2017. Retrieved 27 June 2017.
- ^ a b Roberts: pp. 542-543.
- ^ Treaty of Lisbon (OJ C 306, 17.12.2007) Archived 16 March 2013 at the Wayback Machine, Official Journal of the European Union through EUR-Lex. Retrieved on 1 November 2012.
- ^ TREATY ON EUROPEAN UNION (92/C 191/01) aka Maastricht Treaty Archived 1 February 2009 at the Wayback Machine, Official Journal of the European Union through EUR-Lex. Retrieved on 11 November 2012.
- ^ a b c Basic Law for the Federal Republic of Germany Archived 19 June 2017 at the Wayback Machine, Bundestag (Print version. As at: October 2010). Retrieved on 11 November 2012.
- ^ Constitution of China Archived 26 July 2013 at the Wayback Machine, Chinese Government's Official Web portal. Retrieved 2 November 2012.
- ^ Alston, Philip (1995). Treaty-making and Australia: globalization versus sovereignty?. Annandale: Federation Press. p. 254. ISBN 978-1-86287-195-3.
- ^ Bayefsky, Anne F. (1993), "International Human Rights Law in Canadian Courts", in Kaplan, William; McRae, Donald Malcolm; Cohen, Maxwell (eds.), Law, policy and international justice: essays in honour of Maxwell Cohen, Montreal: McGill-Queen's Press, p. 112, ISBN 978-0-7735-1114-9, retrieved 16 January 2011
- ^ Flemming, Brian (1965). "Canadian Practice in International Law". The Canadian Yearbook of International Law. Vancouver: University of British Columbia Press. III: 337. Archived from the original on 12 April 2016. Retrieved 16 January 2011.
- ^ a b George VI (1 October 1947), Letters Patent Constituting the Office of Governor General of Canada, I, Ottawa: King's Printer for Canada, archived from the original on 24 September 2015, retrieved 29 May 2009
- ^ Office of the Governor General of Canada. "The Governor General - the evolution of Canada's oldest public institution". Queen's Printer for Canada. Archived from the original on 13 June 2011. Retrieved 16 January 2011.
- ^ "The Constitution Act, 1867". Archived from the original on 3 February 2010. Retrieved 29 November 2007.
- ^ a b c The Constitution, as laid down on 17 May 1814 by the Constituent Assembly at Eidsvoll and subsequently amended. Archived 15 May 2012 at the Wayback Machine, Information Service, Parliament of Norway. Retrieved on 11 November 2012.
- ^ a b c d Constitution of the State of QatarArchived 24 October 2004 at the Wayback Machine, Ministry of Foreign Affairs. Retrieved on 17 November 2012.
- ^ Basic Law of Israel: The Military Archived 27 August 2014 at the Wayback Machine, Knesset. Retrieved on 11 November 2011.
- ^ The Riksdag Act Archived 1 February 2013 at the Wayback Machine, Riksdag of Sweden. Retrieved on 16 November 2012.
- ^ Basic Law of Israel: The President of the State Archived 20 October 2017 at the Wayback Machine, Knesset. Retrieved on 11 November 2012.
- ^ Constitution of the Czech Republic Archived 16 July 2012 at the Wayback Machine, Prague Castle Administration. Retrieved on 11 November 2012.
- ^ "PLO body elects Abbas 'President of Palestine'", Khaleej Times Online, 24 November 2008, archived from the original on 8 June 2011
- ^ a b c d Roberts: p. 39.
- ^ Roberts: pp. 37-38.
- ^ Roberts: pp. 41-42.
- ^ a b Roberts: pp. 42-43.
- ^ Roberts: p. 43.
- ^ Constitution, s 2; Australia Act 1986 (Cth and UK), s 7.
- ^ Elizabeth II (1975), Constitution of the Independent State of Papua New Guinea, Port Moresby: World Intellectual Property Organization, Part 5, Division 1, (1)(a), archived from the original on 26 May 2015, retrieved 25 May 2015
- ^ Elizabeth II (1978), Constitution of Tuvalu, Funafuti: Pacific Islands Legal Information Institute, 48(1), archived from the original on 28 August 2015, retrieved 25 May 2015
- ^ Elizabeth II 1978, 51(2)
- ^ "Kerr's Statement Of Reasons". Archived from the original on 16 April 2016. Retrieved 17 December 2014.
- ^ Kerr, John (1978), Matters for Judgment, Macmillan, ISBN 978-0-333-25212-3
- ^ John Alexander Armstrong (1978). Ideology, Politics, and Government in the Soviet Union: An Introduction. University Press of America. p. 165. ISBN 978-0-8191-5405-7.
- ^ F. J. Ferdinand Joseph Maria Feldbrugge (1987). The distinctiveness of Soviet law. Martinus Nijhoff Publishers. p. 23. ISBN 90-247-3576-9. Archived from the original on 22 December 2018. Retrieved 20 December 2017.
- ^ "Ustav Socijalističke Federativne Republike Jugoslavije (1974.) – Wikizvor". hr.wikisource.org. Archived from the original on 5 February 2017. Retrieved 4 February 2017.
- ^ Murphy, Michael Dean. "A Kinship Glossary: Symbols, Terms, and Concepts". Archived from the original on 5 October 2006. Retrieved 5 October 2006.
- ^ Swedish Act of Succession (English Translation as of 2012) Archived 8 February 2014 at the Wayback Machine, The Riksdag. Retrieved on 28 August 2013.
- ^ a b "Deutsches Historisches Museum Berlin - Reinhold Begas - Monuments for the German Empire - Exhibition". Archived from the original on 9 February 2015. Retrieved 9 February 2015.
- ^ Frampton's Jubilee Monument for Queen Victoria, image with dog to show scale. Archived 9 May 2015 at the Wayback Machine
- ^ "Mount Rushmore National Memorial". TravelSouthDakota.com. Archived from the original on 8 February 2015. Retrieved 7 February 2015.
- ^ "Mount Rushmore". HISTORY.com. Archived from the original on 7 February 2015. Retrieved 7 February 2015.
- ^ Guests at the wedding ceremony: Wedding between Crown Princess Victoria and Mr Daniel Westling on Saturday 19 June 2010, 3.30 p.m., at Stockholm Cathedral Archived 29 July 2012 at the Wayback Machine, Royal Court of Sweden. Retrieved on 12 November 2012.
- ^ Selected Guest List for the Wedding Service at Westminster Abbey Archived 12 May 2012 at the Wayback Machine, The Royal Household (2011). Retrieved on 12 November 2012.
- ^ Selected guest list for the wedding service at Cathédrale Notre-Dame de Luxembourg on October 20, 2012 at 11:00 a.m. Archived 5 July 2014 at the Wayback Machine, Government of Luxembourg. Retrieved on 12 November 2012.
- ^ THE FORMER KING CONSTANTINOS OF GREECE AND 8 MEMBERS OF HIS FAMILY v. GREECE Archived 31 January 2013 at the Wayback Machine, (25701/94 | DECISION | COMMISSION (Plenary) | 21 April 1998) European Commission of Human Rights. Retrieved on 12 November 2012.
- ^ CASE OF THE FORMER KING OF GREECE AND OTHERS v. GREECE Archived 31 January 2013 at the Wayback Machine, (25701/94 | Judgment (Merits) | Court (Grand Chamber) | 23 November 2000), European Court of Human Rights. Retrieved on 12 November 2012.
Color vision is an ability of animals to perceive differences between light composed of different wavelengths (i.e., different spectral power distributions) independently of light intensity. Color perception is a part of the larger visual system and is mediated by a complex process between neurons that begins with differential stimulation of different types of photoreceptors by light entering the eye. Those photoreceptors then emit outputs that are propagated through many layers of neurons and then ultimately to the brain. Color vision is found in many animals and is mediated by similar underlying mechanisms with common types of biological molecules and a complex history of evolution in different animal taxa. In primates, color vision may have evolved under selective pressure for a variety of visual tasks including the foraging for nutritious young leaves, ripe fruit, and flowers, as well as detecting predator camouflage and emotional states in other primates.
Isaac Newton discovered that white light, after being split into its component colors by a dispersive prism, could be recombined to make white light by passing the components through a second prism.
The visible light spectrum ranges from about 380 to 740 nanometers. Spectral colors (colors that are produced by a narrow band of wavelengths) such as red, orange, yellow, green, cyan, blue, and violet can be found in this range. These spectral colors do not refer to a single wavelength, but rather to a set of wavelengths: red, 625–740 nm; orange, 590–625 nm; yellow, 565–590 nm; green, 500–565 nm; cyan, 485–500 nm; blue, 450–485 nm; violet, 380–450 nm.
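As a rough illustration, the bands quoted above can be turned into a small lookup (a minimal Python sketch; the cut-off wavelengths are the approximate figures given in this paragraph, and real spectral bands blend into one another rather than ending abruptly):

```python
# Approximate spectral-color bands in nanometres, taken from the figures above.
SPECTRAL_BANDS = [
    ("violet", 380, 450),
    ("blue",   450, 485),
    ("cyan",   485, 500),
    ("green",  500, 565),
    ("yellow", 565, 590),
    ("orange", 590, 625),
    ("red",    625, 740),
]

def spectral_color_name(wavelength_nm: float) -> str:
    """Return the conventional name of the band containing a wavelength, if any."""
    for name, low, high in SPECTRAL_BANDS:
        if low <= wavelength_nm <= high:
            return name
    return "outside the visible range"

print(spectral_color_name(532))   # 'green'  (a typical green laser pointer)
print(spectral_color_name(650))   # 'red'
print(spectral_color_name(1000))  # 'outside the visible range'
```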
Sufficient differences in wavelength cause a difference in the perceived hue; the just-noticeable difference in wavelength varies from about 1 nm in the blue-green and yellow wavelengths to 10 nm and more in the longer red and shorter blue wavelengths. Although the human eye can distinguish up to a few hundred hues, when those pure spectral colors are mixed together or diluted with white light, the number of distinguishable chromaticities can be quite high.
In very low light levels, vision is scotopic: light is detected by rod cells of the retina. Rods are maximally sensitive to wavelengths near 500 nm and play little, if any, role in color vision. In brighter light, such as daylight, vision is photopic: light is detected by cone cells which are responsible for color vision. Cones are sensitive to a range of wavelengths, but are most sensitive to wavelengths near 555 nm. Between these regions, mesopic vision comes into play and both rods and cones provide signals to the retinal ganglion cells. The shift in color perception from dim light to daylight gives rise to differences known as the Purkinje effect.
The perception of "white" is formed by the entire spectrum of visible light, or by mixing colors of just a few wavelengths in animals with few types of color receptors. In humans, white light can be perceived by combining wavelengths such as red, green, and blue, or just a pair of complementary colors such as blue and yellow.
There are a variety of colors in addition to spectral colors and their hues. These include grayscale colors, shades of colors obtained by mixing grayscale colors with spectral colors, violet-red colors, impossible colors, and metallic colors.
Grayscale colors include white, gray, and black. Rods contain rhodopsin, which reacts to light intensity, providing grayscale coloring.
Shades include colors such as pink or brown. Pink is obtained from mixing red and white. Brown may be obtained from mixing orange with grey or black. Navy is obtained from mixing blue and black.
Violet-red colors include hues and shades of magenta. The light spectrum is a line on which violet is one end and the other is red, and yet we see hues of purple that connect those two colors.
Impossible colors are a combination of cone responses that cannot be naturally produced. For example, medium cones cannot be activated completely on their own; if they were, we would see a 'hyper-green' color.
Physiology of color perception
Perception of color begins with specialized retinal cells known as cone cells. Cone cells contain different forms of opsin – a pigment protein – that have different spectral sensitivities. Humans contain three types, resulting in trichromatic color vision.
The cones are conventionally labeled according to the ordering of the wavelengths of the peaks of their spectral sensitivities: short (S), medium (M), and long (L) cone types. These three types do not correspond well to particular colors as we know them. Rather, the perception of color is achieved by a complex process that starts with the differential output of these cells in the retina and which is finalized in the visual cortex and associative areas of the brain.
For example, while the L cones have been referred to simply as red receptors, microspectrophotometry has shown that their peak sensitivity is in the greenish-yellow region of the spectrum. Similarly, the S cones and M cones do not directly correspond to blue and green, although they are often described as such. The RGB color model, therefore, is a convenient means for representing color but is not directly based on the types of cones in the human eye.
The peak response of human cone cells varies, even among individuals with so-called normal color vision; in some non-human species this polymorphic variation is even greater, and it may well be adaptive.
Two complementary theories of color vision are the trichromatic theory and the opponent process theory. The trichromatic theory, or Young–Helmholtz theory, proposed in the 19th century by Thomas Young and Hermann von Helmholtz, posits three types of cones preferentially sensitive to blue, green, and red, respectively. Ewald Hering proposed the opponent process theory in 1872. It states that the visual system interprets color in an antagonistic way: red vs. green, blue vs. yellow, black vs. white. Both theories are generally accepted as valid, describing different stages in visual physiology. Green ←→ Magenta and Blue ←→ Yellow are scales with mutually exclusive boundaries. In the same way that there cannot exist a "slightly negative" positive number, a single eye cannot perceive a bluish-yellow or a reddish-green.

Although both theories are currently widely accepted, past and more recent work has led to criticism of the opponent process theory, stemming from a number of what are presented as discrepancies in the standard opponent process theory. For example, the phenomenon of an after-image of complementary color can be induced by fatiguing the cells responsible for color perception, by staring at a vibrant color for a length of time, and then looking at a white surface. This phenomenon of complementary colors demonstrates cyan, rather than green, to be the complement of red and magenta, rather than red, to be the complement of green, as well as demonstrating, as a consequence, that the reddish-green color proposed to be impossible by opponent process theory is, in fact, the color yellow. Although this phenomenon is more readily explained by the trichromatic theory, explanations for the discrepancy may include alterations to the opponent process theory, such as redefining the opponent colors as red vs. cyan, to reflect this effect. Despite such criticisms, both theories remain in use.

A recent demonstration, using the Color Mondrian, has shown that, just as the color of a surface that is part of a complex 'natural' scene is independent of the wavelength-energy composition of the light reflected from it alone but depends upon the composition of the light reflected from its surrounds as well, so the after-image produced by looking at a given part of a complex scene is also independent of the wavelength-energy composition of the light reflected from it alone. Thus, while the color of the after-image produced by looking at a green surface that is reflecting more 'green' (middle-wave) than 'red' (long-wave) light is magenta, so is the after-image of the same surface when it reflects more 'red' than 'green' light (when it is still perceived as green). This would seem to rule out an explanation of color opponency based on retinal cone adaptation.
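The opponent stage can be illustrated with a toy calculation. The sketch below (Python) forms a red–green signal, a blue–yellow signal and an achromatic signal from hypothetical L, M and S cone responses; the particular weights are a common textbook simplification chosen only for illustration, not measured physiological values:

```python
# Schematic opponent-process coding from cone signals.
# The weights below are a textbook simplification, not physiological values.
def opponent_channels(l: float, m: float, s: float) -> dict:
    """Map normalized L, M, S cone responses (0..1) to three opponent signals."""
    return {
        "red_vs_green":   l - m,             # > 0 reads as reddish, < 0 as greenish
        "blue_vs_yellow": s - (l + m) / 2,   # > 0 reads as bluish, < 0 as yellowish
        "light_vs_dark":  (l + m) / 2,       # an achromatic (luminance-like) signal
    }

# A stimulus exciting L strongly and M weakly signals "red" on the first channel:
print(opponent_channels(l=0.9, m=0.3, s=0.1))
```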
Cone cells in the human eye
A range of wavelengths of light stimulates each of these receptor types to varying degrees. The brain combines the information from each type of receptor to give rise to different perceptions of different wavelengths of light.
| Cone type | Name | Range | Peak wavelength |
|---|---|---|---|
| S | β | 400–500 nm | 420–440 nm |
| M | γ | 450–630 nm | 534–555 nm |
| L | ρ | 500–700 nm | 564–580 nm |
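For rough numerical experiments, the three sensitivity curves are sometimes approximated by simple bell curves centred on the peak wavelengths in the table above. The sketch below does exactly that; the Gaussian shape and the 40 nm spread are assumptions for illustration and are much cruder than measured cone fundamentals:

```python
import math

# Crude Gaussian stand-ins for the cone sensitivity curves.  The peak wavelengths
# come from the table above; the bandwidth (sigma) is an illustrative guess.
CONE_PEAKS_NM = {"S": 430.0, "M": 545.0, "L": 570.0}
SIGMA_NM = 40.0  # assumed spread, for illustration only

def cone_response(cone: str, wavelength_nm: float) -> float:
    """Relative sensitivity of one cone type at a given wavelength (0..1)."""
    peak = CONE_PEAKS_NM[cone]
    return math.exp(-((wavelength_nm - peak) ** 2) / (2 * SIGMA_NM ** 2))

# 590 nm ("orange") light drives L cones harder than M, and S cones barely at all:
for cone in ("S", "M", "L"):
    print(cone, round(cone_response(cone, 590), 3))
```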
Cones and rods are not evenly distributed in the human eye. Cones have a high density at the fovea and a low density in the rest of the retina. Thus color information is mostly taken in at the fovea. Humans have poor color perception in their peripheral vision, and much of the color we see in our periphery may be filled in by what our brains expect to be there on the basis of context and memories. However, our accuracy of color perception in the periphery increases with the size of stimulus.
The opsins (photopigments) present in the L and M cones are encoded on the X chromosome; defective encoding of these leads to the two most common forms of color blindness. The OPN1LW gene, which encodes the opsin present in the L cones, is highly polymorphic; one study found 85 variants in a sample of 236 men. A small percentage of women may have an extra type of color receptor because they have different alleles for the gene for the L opsin on each X chromosome. X chromosome inactivation means that while only one opsin is expressed in each cone cell, both types may occur overall, and some women may therefore show a degree of tetrachromatic color vision. Variations in OPN1MW, which encodes the opsin expressed in M cones, appear to be rare, and the observed variants have no effect on spectral sensitivity.
Color in the human brain
Color processing begins at a very early level in the visual system (even within the retina) through initial color opponent mechanisms. Both Helmholtz's trichromatic theory and Hering's opponent-process theory are therefore correct, but trichromacy arises at the level of the receptors, and opponent processes arise at the level of retinal ganglion cells and beyond. In Hering's theory opponent mechanisms refer to the opposing color effect of red-green, blue-yellow, and light-dark. However, in the visual system, it is the activity of the different receptor types that are opposed. Some midget retinal ganglion cells oppose L and M cone activity, which corresponds loosely to red–green opponency, but actually runs along an axis from blue-green to magenta. Small bistratified retinal ganglion cells oppose input from the S cones to input from the L and M cones. This is often thought to correspond to blue–yellow opponency but actually runs along a color axis from yellow-green to violet.
Visual information is then sent to the brain from retinal ganglion cells via the optic nerve to the optic chiasma: a point where the two optic nerves meet and information from the temporal (contralateral) visual field crosses to the other side of the brain. After the optic chiasma, the visual tracts are referred to as the optic tracts, which enter the thalamus to synapse at the lateral geniculate nucleus (LGN).
The lateral geniculate nucleus is divided into laminae (zones), of which there are three types: the M-laminae, consisting primarily of M-cells, the P-laminae, consisting primarily of P-cells, and the koniocellular laminae. M- and P-cells receive relatively balanced input from both L- and M-cones throughout most of the retina, although this seems to not be the case at the fovea, with midget cells synapsing in the P-laminae. The koniocellular laminae receive axons from the small bistratified ganglion cells.
After synapsing at the LGN, the visual tract continues on back to the primary visual cortex (V1) located at the back of the brain within the occipital lobe. Within V1 there is a distinct band (striation). This is also referred to as "striate cortex", with other cortical visual regions referred to collectively as "extrastriate cortex". It is at this stage that color processing becomes much more complicated.
In V1 the simple three-color segregation begins to break down. Many cells in V1 respond to some parts of the spectrum better than others, but this "color tuning" is often different depending on the adaptation state of the visual system. A given cell that might respond best to long-wavelength light if the light is relatively bright might then become responsive to all wavelengths if the stimulus is relatively dim. Because the color tuning of these cells is not stable, some believe that a different, relatively small, population of neurons in V1 is responsible for color vision. These specialized "color cells" often have receptive fields that can compute local cone ratios. Such "double-opponent" cells were initially described in the goldfish retina by Nigel Daw; their existence in primates was suggested by David H. Hubel and Torsten Wiesel, first demonstrated by C.R. Michael and subsequently proven by Bevil Conway. As Margaret Livingstone and David Hubel showed, double opponent cells are clustered within localized regions of V1 called blobs, and are thought to come in two flavors, red–green and blue-yellow. Red-green cells compare the relative amounts of red-green in one part of a scene with the amount of red-green in an adjacent part of the scene, responding best to local color contrast (red next to green). Modeling studies have shown that double-opponent cells are ideal candidates for the neural machinery of color constancy explained by Edwin H. Land in his retinex theory.
From the V1 blobs, color information is sent to cells in the second visual area, V2. The cells in V2 that are most strongly color tuned are clustered in the "thin stripes" that, like the blobs in V1, stain for the enzyme cytochrome oxidase (separating the thin stripes are interstripes and thick stripes, which seem to be concerned with other visual information like motion and high-resolution form). Neurons in V2 then synapse onto cells in the extended V4. This area includes not only V4, but two other areas in the posterior inferior temporal cortex, anterior to area V3, the dorsal posterior inferior temporal cortex, and posterior TEO. Area V4 was initially suggested by Semir Zeki to be exclusively dedicated to color, and he later showed that V4 can be subdivided into subregions with very high concentrations of color cells separated from each other by zones with lower concentration of such cells though even the latter cells respond better to some wavelengths than to others, a finding confirmed by subsequent studies. The presence in V4 of orientation-selective cells led to the view that V4 is involved in processing both color and form associated with color but it is worth noting that the orientation selective cells within V4 are more broadly tuned than their counterparts in V1, V2 and V3. Color processing in the extended V4 occurs in millimeter-sized color modules called globs. This is the part of the brain in which color is first processed into the full range of hues found in color space.
Anatomical studies have shown that neurons in extended V4 provide input to the inferior temporal lobe. "IT" cortex is thought to integrate color information with shape and form, although it has been difficult to define the appropriate criteria for this claim. Despite this murkiness, it has been useful to characterize this pathway (V1 > V2 > V4 > IT) as the ventral stream or the "what pathway", distinguished from the dorsal stream ("where pathway") that is thought to analyze motion, among other features.
Subjectivity of color perception
Color is a feature of visual perception by an observer. There is a complex relationship between the wavelengths of light in the visual spectrum and human experiences of color. Although most people are assumed to have the same mapping, the philosopher John Locke recognized that alternatives are possible, and described one such hypothetical case with the "inverted spectrum" thought experiment. For example, someone with an inverted spectrum might experience green while seeing 'red' (700 nm) light, and experience red while seeing 'green' (530 nm) light. This inversion has never been demonstrated in experiment, though.
Synesthesia (or ideasthesia) provides some atypical but illuminating examples of subjective color experience triggered by input that is not even light, such as sounds or shapes. The possibility of a clean dissociation between color experience and properties of the world reveals that color is a subjective psychological phenomenon.
The Himba people have been found to categorize colors differently from most Westerners and are able to easily distinguish close shades of green, barely discernible for most people. The Himba have created a very different color scheme which divides the spectrum into dark shades (zuzu in Himba), very light (vapa), vivid blue and green (buru) and dry colors, as an adaptation to their specific way of life.
The perception of color depends heavily on the context in which the perceived object is presented.
In color vision, chromatic adaptation refers to color constancy: the ability of the visual system to preserve the appearance of an object under a wide range of light sources. For example, a white page under blue, pink, or purple light will reflect mostly blue, pink, or purple light to the eye, respectively; the brain, however, compensates for the effect of lighting (based on the color shift of surrounding objects) and is more likely to interpret the page as white under all three conditions.
In color science, chromatic adaptation is the estimation of the representation of an object under a different light source from the one in which it was recorded. A common application is to find a chromatic adaptation transform (CAT) that will make the recording of a neutral object appear neutral (color balance), while keeping other colors also looking realistic. For example, chromatic adaptation transforms are used when converting images between ICC profiles with different white points. Adobe Photoshop, for example, uses the Bradford CAT.
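A linearized von Kries-style adaptation of this kind can be sketched in a few lines. The matrix below is the widely published Bradford matrix, and the white points are standard XYZ values for illuminants A (incandescent) and D65 (daylight); treat the exact numbers, and the omission of the Bradford non-linearity, as simplifying assumptions of this illustration rather than a description of any particular product's implementation:

```python
import numpy as np

# A von Kries-style chromatic adaptation sketch using the (linearized) Bradford matrix.
M_BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

WHITE_D65 = np.array([0.95047, 1.00000, 1.08883])  # daylight white point (assumed)
WHITE_A   = np.array([1.09850, 1.00000, 0.35585])  # incandescent white point (assumed)

def adapt_xyz(xyz, src_white=WHITE_A, dst_white=WHITE_D65):
    """Re-express an XYZ color recorded under src_white as if seen under dst_white."""
    src = M_BRADFORD @ src_white           # sharpened cone-like responses of the whites
    dst = M_BRADFORD @ dst_white
    scale = np.diag(dst / src)             # independent gain per channel (von Kries rule)
    cat = np.linalg.inv(M_BRADFORD) @ scale @ M_BRADFORD
    return cat @ np.asarray(xyz)

# A neutral gray recorded under illuminant A maps to a neutral gray under D65:
print(adapt_xyz([0.5493, 0.5, 0.1779]))    # ~[0.475, 0.5, 0.544], i.e. 0.5 * D65 white
```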
Color vision in nonhumans
Many species can see light with frequencies outside the human "visible spectrum". Bees and many other insects can detect ultraviolet light, which helps them to find nectar in flowers. Plant species that depend on insect pollination may owe reproductive success to ultraviolet "colors" and patterns rather than how colorful they appear to humans. Birds, too, can see into the ultraviolet (300–400 nm), and some have sex-dependent markings on their plumage that are visible only in the ultraviolet range. Many animals that can see into the ultraviolet range, however, cannot see red light or any other reddish wavelengths. For example, bees' visible spectrum ends at about 590 nm, just before the orange wavelengths start. Birds, however, can see some red wavelengths, although not as far into the light spectrum as humans. It is a myth that the common goldfish is the only animal that can see both infrared and ultraviolet light; their color vision extends into the ultraviolet but not the infrared.
The basis for this variation is the number of cone types that differ between species. Mammals, in general, have a color vision of a limited type, and usually have red-green color blindness, with only two types of cones. Humans, some primates, and some marsupials see an extended range of colors, but only by comparison with other mammals. Most non-mammalian vertebrate species distinguish different colors at least as well as humans, and many species of birds, fish, reptiles, and amphibians, and some invertebrates, have more than three cone types and probably superior color vision to humans.
In most Catarrhini (Old World monkeys and apes—primates closely related to humans), there are three types of color receptors (known as cone cells), resulting in trichromatic color vision. These primates, like humans, are known as trichromats. Many other primates (including New World monkeys) and other mammals are dichromats, which is the general color vision state for mammals that are active during the day (e.g., felines, canines, ungulates). Nocturnal mammals may have little or no color vision. Trichromat non-primate mammals are rare.
Many invertebrates have color vision. Honeybees and bumblebees have trichromatic color vision which is insensitive to red but sensitive to ultraviolet. Osmia rufa, for example, possesses a trichromatic color system, which it uses in foraging for pollen from flowers. In view of the importance of color vision to bees, one might expect these receptor sensitivities to reflect their specific visual ecology; for example, the types of flowers that they visit. However, the main groups of hymenopteran insects excluding ants (i.e., bees, wasps and sawflies) mostly have three types of photoreceptor, with spectral sensitivities similar to the honeybee's. Papilio butterflies possess six types of photoreceptors and may have pentachromatic vision. The most complex color vision system in the animal kingdom has been found in stomatopods (such as the mantis shrimp) having between 12 and 16 spectral receptor types thought to work as multiple dichromatic units.
Vertebrate animals such as tropical fish and birds sometimes have more complex color vision systems than humans; thus the many subtle colors they exhibit generally serve as direct signals for other fish or birds, and not to signal mammals. In bird vision, tetrachromacy is achieved through up to four cone types, depending on species. Each single cone contains one of the four main types of vertebrate cone photopigment (LWS/ MWS, RH2, SWS2 and SWS1) and has a colored oil droplet in its inner segment. Brightly colored oil droplets inside the cones shift or narrow the spectral sensitivity of the cell. Pigeons may be pentachromats.
Reptiles and amphibians also have four cone types (occasionally five), and probably see at least the same number of colors that humans do, or perhaps more. In addition, some nocturnal geckos and frogs have the capability of seeing color in dim light. At least some color-guided behaviors in amphibians have also been shown to be wholly innate, developing even in visually deprived animals.
In the evolution of mammals, segments of color vision were lost, then for a few species of primates, regained by gene duplication. Eutherian mammals other than primates (for example, dogs, mammalian farm animals) generally have less-effective two-receptor (dichromatic) color perception systems, which distinguish blue, green, and yellow—but cannot distinguish oranges and reds. There is some evidence that a few mammals, such as cats, have redeveloped the ability to distinguish longer wavelength colors, in at least a limited way, via one-amino-acid mutations in opsin genes. The adaptation to see reds is particularly important for primate mammals, since it leads to the identification of fruits, and also newly sprouting reddish leaves, which are particularly nutritious.
However, even among primates, full color vision differs between New World and Old World monkeys. Old World primates, including monkeys and all apes, have vision similar to humans. New World monkeys may or may not have color sensitivity at this level: in most species, males are dichromats, and about 60% of females are trichromats, but the owl monkeys are cone monochromats, and both sexes of howler monkeys are trichromats. Visual sensitivity differences between males and females in a single species are due to the gene for the yellow-green sensitive opsin protein (which confers the ability to differentiate red from green) residing on the X sex chromosome.
Color perception mechanisms are highly dependent on evolutionary factors, of which the most prominent is thought to be satisfactory recognition of food sources. In herbivorous primates, color perception is essential for finding proper (immature) leaves. In hummingbirds, particular flower types are often recognized by color as well. On the other hand, nocturnal mammals have less-developed color vision since adequate light is needed for cones to function properly. There is evidence that ultraviolet light plays a part in color perception in many branches of the animal kingdom, especially insects. In general, the optical spectrum encompasses the most common electronic transitions in matter and is therefore the most useful for collecting information about the environment.
The evolution of trichromatic color vision in primates occurred as the ancestors of modern monkeys, apes, and humans switched to diurnal (daytime) activity and began consuming fruits and leaves from flowering plants. Color vision, with UV discrimination, is also present in a number of arthropods—the only terrestrial animals besides the vertebrates to possess this trait.
Some animals can distinguish colors in the ultraviolet spectrum. The UV spectrum falls outside the human visible range, except for some cataract surgery patients. Birds, turtles, lizards, many fish and some rodents have UV receptors in their retinas. These animals can see the UV patterns found on flowers and other wildlife that are otherwise invisible to the human eye.
Ultraviolet vision is an especially important adaptation in birds. It allows birds to spot small prey from a distance, navigate, avoid predators, and forage while flying at high speeds. Birds also utilize their broad spectrum vision to recognize other birds, and in sexual selection.
Mathematics of color perception
A "physical color" is a combination of pure spectral colors (in the visible range). In principle there exist infinitely many distinct spectral colors, and so the set of all physical colors may be thought of as an infinite-dimensional vector space (a Hilbert space). This space is typically notated Hcolor. More technically, the space of physical colors may be considered to be the topological cone over the simplex whose vertices are the spectral colors, with white at the centroid of the simplex, black at the apex of the cone, and the monochromatic color associated with any given vertex somewhere along the line from that vertex to the apex depending on its brightness.
An element C of Hcolor is a function from the range of visible wavelengths—considered as an interval of real numbers [Wmin,Wmax]—to the real numbers, assigning to each wavelength w in [Wmin,Wmax] its intensity C(w).
A humanly perceived color may be modeled as three numbers: the extents to which each of the 3 types of cones is stimulated. Thus a humanly perceived color may be thought of as a point in 3-dimensional Euclidean space. We call this space R3color.
Since each wavelength w stimulates each of the 3 types of cone cells to a known extent, these extents may be represented by 3 functions s(w), m(w), l(w) corresponding to the response of the S, M, and L cone cells, respectively.
Finally, since a beam of light can be composed of many different wavelengths, to determine the extent to which a physical color C in Hcolor stimulates each cone cell, we must calculate the integral (with respect to w), over the interval [Wmin,Wmax], of C(w)·s(w), of C(w)·m(w), and of C(w)·l(w). The triple of resulting numbers associates with each physical color C (which is an element in Hcolor) a particular perceived color (which is a single point in R3color). This association is easily seen to be linear. It may also easily be seen that many different elements in the "physical" space Hcolor can all result in the same single perceived color in R3color, so a perceived color is not unique to one physical color.
Thus human color perception is determined by a specific, non-unique linear mapping from the infinite-dimensional Hilbert space Hcolor to the 3-dimensional Euclidean space R3color.
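As a rough numerical illustration of this mapping (not a faithful model), the sketch below approximates the three integrals using made-up Gaussian sensitivity curves standing in for s(w), m(w), and l(w); real calculations would use measured cone fundamentals such as the CIE LMS functions.

```python
import numpy as np

# Illustrative sketch only: the Gaussian curves below are stand-ins for the
# real cone sensitivities s(w), m(w), l(w), which would come from measured data.
wavelengths = np.linspace(380, 700, 321)   # visible range in nanometres

def bell(peak, width):
    return np.exp(-((wavelengths - peak) ** 2) / (2 * width ** 2))

s_curve, m_curve, l_curve = bell(440, 25), bell(540, 35), bell(570, 40)

def perceived_color(spectrum):
    """Integrate C(w)*s(w), C(w)*m(w), C(w)*l(w) over the visible range."""
    return tuple(float(np.trapz(spectrum * curve, wavelengths))
                 for curve in (s_curve, m_curve, l_curve))

# Two physically different spectra mapping to the same triple would be metamers.
flat_white = np.ones_like(wavelengths)     # equal power at every wavelength
print(perceived_color(flat_white))
```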
Technically, the image of the (mathematical) cone over the simplex whose vertices are the spectral colors, by this linear mapping, is also a (mathematical) cone in R3color. Moving directly away from the vertex of this cone represents maintaining the same chromaticity while increasing its intensity. Taking a cross-section of this cone yields a 2D chromaticity space. Both the 3D cone and its projection or cross-section are convex sets; that is, any mixture of spectral colors is also a color.
In practice, it would be quite difficult to physiologically measure an individual's three cone responses to various physical color stimuli. Instead, a psychophysical approach is taken. Three specific benchmark test lights are typically used; let us call them S, M, and L. To calibrate human perceptual space, scientists allowed human subjects to try to match any physical color by turning dials to create specific combinations of intensities (IS, IM, IL) for the S, M, and L lights, resp., until a match was found. This needed only to be done for physical colors that are spectral, since a linear combination of spectral colors will be matched by the same linear combination of their (IS, IM, IL) matches. Note that in practice, often at least one of S, M, L would have to be added with some intensity to the physical test color, and that combination matched by a linear combination of the remaining 2 lights. Across different individuals (without color blindness), the matchings turned out to be nearly identical.
By considering all the resulting combinations of intensities (IS, IM, IL) as a subset of 3-space, a model for human perceptual color space is formed. (Note that when one of S, M, L had to be added to the test color, its intensity was counted as negative.) Again, this turns out to be a (mathematical) cone, not a quadric, but rather all rays through the origin in 3-space passing through a certain convex set. Again, this cone has the property that moving directly away from the origin corresponds to increasing the intensity of the S, M, L lights proportionately. Again, a cross-section of this cone is a planar shape that is (by definition) the space of "chromaticities" (informally: distinct colors); one particular such cross-section, corresponding to constant X+Y+Z of the CIE 1931 color space, gives the CIE chromaticity diagram.
This system implies that for any hue or non-spectral color not on the boundary of the chromaticity diagram, there are infinitely many distinct physical spectra that are all perceived as that hue or color. So, in general, there is no such thing as the combination of spectral colors that we perceive as (say) a specific version of tan; instead, there are infinitely many possibilities that produce that exact color. The boundary colors that are pure spectral colors can be perceived only in response to light that is purely at the associated wavelength, while the boundary colors on the "line of purples" can each only be generated by a specific ratio of the pure violet and the pure red at the ends of the visible spectral colors.
The CIE chromaticity diagram is horseshoe-shaped, with its curved edge corresponding to all spectral colors (the spectral locus), and the remaining straight edge corresponding to the most saturated purples, mixtures of red and violet.
- Color blindness
- Color theory
- Inverted spectrum
- Primary color
- The dress
- Visual perception
Presentation transcript: "Riemann Sums, the Trapezoidal Rule, and Simpson's Rule" by Cameron Clary.
Riemann Sums, the Trapezoidal Rule, and Simpson's Rule are used to find the area of a region between or under curves that usually cannot be integrated by hand.
Riemann Sums estimate the area under a curve by using the sum of areas of equal width rectangles placed under a curve. The more rectangles you have, the more accurate the estimated area.
Riemann Sums approximate a definite integral on a closed interval [a,b] with the formula: ∫ from a to b of f(x) dx ≈ Σ (from i = 1 to n) f(xᵢ)·Δx. The interval is [a,b] and n is the number of rectangles used; (b − a)/n, also called Δx, refers to the width of the rectangles; and f(xᵢ) represents the height of the rectangles.
There are three types of Riemann Sums: Left Riemann, Right Riemann, and Midpoint Riemann The left, right, and midpoint refer to the corners of the rectangles and how they are placed on the curve in order to estimate the area.
Left Riemann Sums place the left corner of the rectangles used to estimate the area on the curve. Left Riemann sums are an underestimation of the area under a curve due to the empty space between the rectangles and the curve.
Right Riemann sums place the Right corner of the rectangles on the curve. Right Riemann Sums are an overestimation of area because of all the extra space that is not under the curve that is still calculated in the area because it is inside the rectangles
Midpoint Riemann Sums place the middle of the Rectangle on the curve Midpoint Riemann Sums are the most accurate because the area found in the part of the rectangle that is over the curve makes up for the area lost in the space between the curve and the rectangle
First, find the width of the rectangles, Δx. On the interval [0,1] with n=4, Δx = (1 − 0)/4 = 1/4. Then, starting with a, the first number on the interval, list the x-values by adding Δx each time: 0, 1/4, 1/2, 3/4, 1. You should always begin with a and end with b; if not, you plugged in the numbers wrong.
Once you have Δx and the x-values, you can plug them into the formula. When doing a Left Riemann sum, plug in all of the x-values except the last one. When doing a Right Riemann sum, plug in all of the x-values except the first one. When doing a Midpoint Riemann sum, average each pair of adjacent x-values and plug in those midpoints instead.
If the area can be found by hand, you can compare your answers from the Riemann Sums to the actual answer to see how accurate your estimate was. In this problem, we can find the area by hand. Comparing the answers, the area found using the Left Riemann sum was under the actual area, the Right Riemann sum was over the actual area, and the Midpoint Riemann sum was the closest to the actual answer. None of the Riemann Sum types gave the exact answer, but that is because they are estimations.
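If you want to check the arithmetic with a short script, the sketch below computes all three sums on [0,1] with n = 4. It assumes the worked example's function is f(x) = x², the same integrand used in the trapezoidal example later, whose exact area is 1/3.

```python
def riemann_sum(f, a, b, n, kind="left"):
    """Approximate the integral of f on [a, b] with n equal-width rectangles."""
    dx = (b - a) / n
    if kind == "left":
        points = [a + i * dx for i in range(n)]            # skip the last x-value
    elif kind == "right":
        points = [a + i * dx for i in range(1, n + 1)]     # skip the first x-value
    elif kind == "midpoint":
        points = [a + (i + 0.5) * dx for i in range(n)]    # midpoints of each strip
    else:
        raise ValueError("kind must be 'left', 'right' or 'midpoint'")
    return sum(f(x) for x in points) * dx

f = lambda x: x ** 2          # assumed example function
for kind in ("left", "right", "midpoint"):
    print(kind, riemann_sum(f, 0, 1, 4, kind))
# left = 0.21875 (under), right = 0.46875 (over), midpoint = 0.328125
# exact area is 1/3 ≈ 0.3333
```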
Calculate the Left and Right Riemann Sum for on [0, π] using 4 rectangles.
The Trapezoidal Rule is very similar to the Riemann Sums, but instead of using rectangles to approximate area, it uses trapezoids. The Trapezoidal Rule is generally more accurate than the left and right Riemann sums.
When using the Trapezoidal Rule, use the formula: ∫ from a to b of f(x) dx ≈ (Δx/2)·[f(x₀) + 2f(x₁) + 2f(x₂) + … + 2f(xₙ₋₁) + f(xₙ)]. All but the first and last function values are multiplied by two because their sides are shared by two trapezoids. As in the Riemann Sums, Δx is still added to each x-value in turn to get x₀, x₁, …, xₙ.
Use the Trapezoidal Rule to calculate the area under f(x) = x² on the interval [0,1] when n=4. Here Δx = (1 − 0)/4 = 1/4, and the x-values are 0, 1/4, 1/2, 3/4, 1. Once you have all this information, all you have to do is plug the numbers into the formula.
Just like in the Riemann Sums, if the area can be found by hand, you can use that answer to check how close the estimate was to the exact answer. In this particular problem, the exact answer is 1/3 units squared, or 0.3333 units squared. Using the Trapezoidal Rule, the estimate comes out to be 0.34375 units squared. The estimated answer is very close to the exact answer.
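The same estimate can be reproduced with a short script; again assuming the integrand is f(x) = x², it returns 0.34375.

```python
def trapezoidal_rule(f, a, b, n):
    """Approximate the integral of f on [a, b] using n trapezoids."""
    dx = (b - a) / n
    total = f(a) + f(b)                                     # end values counted once
    total += 2 * sum(f(a + i * dx) for i in range(1, n))    # interior values counted twice
    return total * dx / 2

print(trapezoidal_rule(lambda x: x ** 2, 0, 1, 4))          # 0.34375 (exact answer is 1/3)
```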
Calculate the Trapezoidal Rule for on the interval [1,2] when n=5
1. Find the Left Riemann of on [0,2] when n=6 2. Find the Right Riemann of on [0,2] when n=6 3. Find the Midpoint Riemann of on [0,2] when n=6 4. Calculate the Trapezoidal rule for 5. Calculate the Simpson’s Rule for on [0,π] for n=4 on [2,4] where n=4
Throughout this series, we will be tackling the basics such as:
(Part 1) How does DNS work?
(Part 2) Network Stack, OSI Model [You are here!]
(Part 3) HTTP Methods and Formats
(Part 4) Client Identification
(Part 5) Basic/Digest Authentication
(Part 6) HTTPS working with SSL/TLS
The Open Systems Interconnection (OSI) Model is a standardized model for telecommunication in computer systems. It does not regard the underlying technology, but instead the layers involved in communication. Let us explore the different layers within the OSI Model:
1. Application Layer
This layer allows applications to communicate over the network once the connection has been established, such as from the Web Browser (Application) to the Server. Examples of protocols in this layer include HTTP and TELNET.
HyperText Transfer Protocol (HTTP)
A set of rules for transferring files over the Internet. For example, when you enter the URL into the browser, the browser sends an HTTP request for the webpage. The host would then return the webpage, together with all the elements that are within, such as images, text, videos, styling fonts, etc.
2. Transport Layer
This layer is responsible for the host-to-host communication of messages. Examples of protocols in this layer include TCP and UDP.
Transmission Control Protocol (TCP)
The most common connection-oriented protocol. It defines how to establish and maintain a network conversation. It is responsible for establishing a connection (called a socket) between the client and the host in a 3-way handshake.
The user requesting the data will send a SYN data packet to the server, requesting synchronization. The server will then respond with a SYN-ACK to the user, indicating that it has acknowledged the data packet, and would like to connect as well. The connection is hence established when the user sends the last ACK to the server.
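As a minimal sketch of this from the application's point of view, the operating system carries out the SYN / SYN-ACK / ACK exchange when connect() is called, and the program simply uses the resulting connection. The host name and the hand-written request below are illustrative only.

```python
import socket

# The OS performs the three-way handshake described above inside create_connection();
# the application only sees the resulting established connection.
with socket.create_connection(("example.com", 80), timeout=5) as sock:
    sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n")
    reply = sock.recv(1024)                                  # first chunk of the response
    print(reply.decode(errors="replace").splitlines()[0])    # e.g. "HTTP/1.1 200 OK"
```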
TCP is the most common due to its elegance; it is able to offer the following:
Establish a handshake protocol between end-points to ensure connection before data is exchanged, and transmit as a data stream (data packets).
Using checksums, it ensures that the data packets transmitted and received are the same. If there are missing/corrupted packets, it will request for re-transmission of the data packets by sending a NACK message to the sender.
The data packets are numbered and transmitted. As such, TCP will ensure that the received packets are re-ordered before delivering the application.
The rate of data transmission is regulated to improve efficiency while preventing buffer overruns/underruns, where data is sent faster than the receiver is able to process it, and vice versa.
The mechanics behind it are explained below in the TCP Slow Start section.
Basically, it is able to send over multiple streams of information concurrently over the same socket. These are done through different ports on the socket. We will discuss the differences between Multiplexing and Pipelining further along in the article.
User Datagram Protocol (UDP)
While similar to TCP, it is a connection-less protocol. It is the complete opposite of TCP, making it unreliable and unordered. Dropped packets will not be re-transmitted, causing gaps in the data.
However, that makes it best for time-sensitive applications, such as voice calls over the internet (VoIP). This is because it does not require the 3-way handshake before transmitting, making it fast. In addition, dropped data packets are not a problem in VoIP, as the human ear is very good at handling the short gaps that are typical with dropped packets.
3. Network Layer
This layer is responsible for providing data routing paths for network connections. Basically, it moves data packets across the network with the most logical path.
Internet Protocol (IP)
Defines the structure of the data packets, as well as labeling it with the source and destination information.
The source and destination information are in the form of IP addresses.
4. Link/ Physical Layer
This layer is the root of the OSI model, where information is transmitted either in the Local Area Network (LAN) for the Link Layer, and a physical signal such as electrical, mechanical medium in the form of code words or symbols in the Physical Layer.
By running tracert google.com, the route can be traced from the client side (your computer) to the host (google.com).
From the output above, you can see the route starting from my device at 192.168.1.254 to the router at 10.243.128.1, before passing through the Internet Service Provider (ISP) located in Portugal, and so forth.
IP is only responsible for the structure of the data packet. As such, it does nothing to recover a data packet that is corrupted or dropped. This is where TCP comes into play, numbering the data packets before sending them to the client. At the client's side, TCP requests re-transmission of lost or corrupted packets, and then rearranges the packets of data.
As we have mentioned earlier, HTTP can now make requests via the connection made by TCP Handshake. But how do they complement each other?
HTTP Persistent Connections
This would allow multiple HTTP request/response on a single TCP connection, as opposed to opening a new connection upon every request/response.
This is done through the HTTP header Connection: Keep-Alive. By default, the connection will only close when a response containing Connection: Close is sent, typically after about 30 seconds of idle time.
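From the client's side, one way to see persistent connections in practice is a requests.Session in Python, which pools and reuses the underlying TCP connection across requests. The URL and paths below are placeholders.

```python
import requests

# A requests.Session reuses the underlying TCP connection (keep-alive), so
# several HTTP request/response pairs can ride on a single handshake.
with requests.Session() as session:
    for path in ("/", "/about", "/contact"):        # hypothetical paths
        response = session.get("https://example.com" + path)
        print(path, response.status_code)
# Calling requests.get() three times instead may open a fresh connection each time.
```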
TCP Slow Start
As mentioned before, TCP supports flow control. This is done through TCP Slow Start, which is a form of prevention for network congestion.
The sender has a congestion window (CWND) and the receiver has a receiver window (RWND). If data is sent faster than these windows allow, the receiver's buffer overruns; if it arrives more slowly than it is consumed, the buffer underruns.
To prevent that, the sender will begin by sending a data packet with a small congestion window (CWND = 1), to slowly probe the receiver for its receiver window.
The receiver will respond with an acknowledgement (ACK), prompting the sender to double the number of data packets each round until an acknowledgement fails to arrive. At this point, the optimum number of data packets has been discovered, allowing other congestion-control algorithms to keep the connection at this speed.
Hence, TCP Slow Start is able to figure out the optimum number of data packets to send before the connection is closed. This will allow the amount of data sent from the host to the client to be optimized without the risk of buffer overrun (data is sent faster than it can be received).
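A toy simulation of the exponential-growth phase might look like the sketch below; real TCP also tracks a slow-start threshold and switches to congestion avoidance, which this deliberately ignores.

```python
# Toy model of slow start's exponential-growth phase (not real TCP behaviour).
def slow_start(receiver_window_packets, cwnd=1):
    rounds = []
    while cwnd <= receiver_window_packets:
        rounds.append(cwnd)       # packets sent this round trip
        cwnd *= 2                 # doubled after each acknowledged round
    return rounds

print(slow_start(64))             # [1, 2, 4, 8, 16, 32, 64]
```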
Other HTTP Features
Pipelining, a feature of HTTP/1.1, allows multiple requests to be sent at once on the same socket, without waiting for a response. However, it has been replaced by multiplexing in the newer HTTP/2.
The key difference is that although both allow for multiple requests all at once on the same socket, Pipelining would still require responses to be sent in order. It means that if the items requested are in the order (A, B, C), the client would not receive item C if item B has not been delivered properly.
In Multiplexing, the order does not matter. This would allow quicker delivery time.
These techniques are best used with idempotent methods, which respond the same way regardless of how many times they are requested. For example, requesting a web page multiple times returns the same web page.
Ever opened a webpage and seen multiple components of the webpage (video bar, thumbnails, buttons) load simultaneously?
This is made possible with Parallel Connections, where there is more than one TCP Connection established at the same time, allowing these components to load concurrently instead of one after another.
However, although it might seem to load faster, it might be held back by the client’s limited bandwidth. If all Parallel Connections are competing for the limited bandwidth, each component will load proportionately slower, resulting in zero advantage in total loading speed.
With the OSI Model, we can easily understand the big picture of networks, and how they interact with each other from hardware to software.
In general, it is a great teaching tool as well as a reference for troubleshooting. The model is also useful for design, as it investigates the functions at every layer, forcing one to ponder over the design layer by layer.
What I have gone through so far is the OSI 5-Layer Model; there is also the full OSI 7-Layer Model, which adds the Session and Presentation layers and also deals with identification, authentication and data encryption.
This is Part 2 of the HTTP Introductions Series. You can read the first article about the importance of DNS Servers in Part 1. Let’s explore the structure of HTTP Requests next in Part 3! |
Collect Raw Data
This step describes types of and ways to collect raw data (experimental results). Raw data includes observations (information collected about something by using your senses) made during testing. The two types of observations are qualitative and quantitative. A quantitative observation is a description of the amount of something. Numbers are used in quantitative descriptions. Instruments, such as a balance, a ruler, and a timer, are used to measure quantities or to describe the amount of the property being observed, such as mass, height, or time.
Metric measurements are generally the preferred units of measurement for science fair projects; for example, length in meters, mass in grams, volume in milliliters, and temperature in degrees Celsius. Another type of quantitative observation can be a scale that you design. For example, if your experiment involves measuring the change in the freshness of flowers, you might have a scale of freshness from 1 to 5, with 5 being the most fresh and having no dry parts on the petals and 1 being the least fresh with each petal being totally dry.
A qualitative observation is a description of the physical properties of something, including how it looks, sounds, feels, smells, and/or tastes. Words are used in a qualitative description. The qualitative description of a light could be about its color and would include words such as white, yellow, blue, and red.
As you collect raw data, record it in your log book. You want your log to be organized and neat, but you should not recopy the raw data just to tidy the journal. Instead, recopy only the data that you want to present on your display, arranging it in tables and/or graphs so that it is more easily understandable and meaningful to observers. (See chapter 10 for information about the project display.)
Data is generally recorded in a table, which is a chart in which information is arranged in rows and columns. A column is a vertical listing of data values and a row is a horizontal listing of data values. There are different ways of designing a table, but all tables should have a title (a descriptive heading) and rows and columns that are labeled. If your table shows measurements, the units of measurement, such as minutes or centimeters, should be part of the column's or row's label.
For an experimental data table, such as Table 8.1, the title generally describes the dependent variable of the experiment, such as "Moths' Attraction to Light," which in this case is for the data from an experiment where yellow and white lightbulbs (independent variable) are used and the number of moths attracted to each light is counted (dependent variable). In contrast, the title "White Light versus Yellow Light in the Attraction of Moths" expresses what is being compared. As a key part of the data organization, an average of each of the testings is calculated.
Analyzing and Interpreting Data
When you have finished collecting the data from your project, the next step is to interpret and analyze it. To analyze means to examine, compare, and relate all the data. To interpret the data means to restate it, which involves reorganizing it into a more easily understood form, such as by graphing it. A graph is a visual representation of data that shows a relationship between two variables. All graphs should have:
- A title.
- Titles for the x-axis (horizontal) and y-axis (vertical).
- Scales with numbers that have the same interval between each division.
- Labels for the categories being counted. Scales often start at zero, but they don't have to.
The three most common graphs used in science fair projects are the bar graph, the circle graph, and the line graph. Graphs are easily prepared using graphing software on a computer. But if these tools are not available to you, here are hints for drawing each type of graph.
In a bar graph, you use solid bar-like shapes to show the relationship between the two variables. Bar graphs can have vertical or horizontal bars. The width and separation of each bar should be the same. The length of a bar represents a specific number on a scale, such as 10 moths. The width of a bar is not significant and can depend on available space due to the number of bars needed. A bar graph has one scale, which can be on the horizontal or vertical axis. This type of graph is most often used when the independent variable is qualitative, such as the color of light in Table 8.1. The independent variable for the Moths' Attraction to Light table is the color of light—white, yellow, or no light (control)—and the dependent variable for this data is the number of moths near each light. A bar graph using the data from Table 8.1 is shown in Figure 8.1. Since the average number of moths from the data varies from 1 to 12, a scale of 0 to 15 was used, with each unit of the scale representing 1 moth. The heights of the bars in the bar graph show clearly that some moths were found in the area without light and some near the yellow light, but the greatest number were present in the area with white light.
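If graphing software is available, a bar graph like Figure 8.1 can be produced in a few lines. The sketch below uses Python's matplotlib with the average counts quoted in the text (white 12, yellow 4, control 1); the output file name is arbitrary.

```python
import matplotlib.pyplot as plt

# Bar graph in the style of Figure 8.1, using the averages described in the text.
colors_of_light = ["White", "Yellow", "No light (control)"]
average_moths = [12, 4, 1]

plt.bar(colors_of_light, average_moths)
plt.title("Moths' Attraction to Light")
plt.xlabel("Color of light (independent variable)")
plt.ylabel("Average number of moths (dependent variable)")
plt.ylim(0, 15)        # scale of 0 to 15, one unit per moth
plt.savefig("moth_bar_graph.png")
```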
A circle graph (also called a pie chart) is a graph in which the area of a circle represents a sum of data, and the sizes of the pie-shaped pieces into which the circle is divided represent the amount of data. To plot your data on a circle graph, you need to calculate the size of each section. An entire circle represents 360°, so each section of a circle graph is a fraction of 360°. For example, data from Table 8.1 was used to prepare the circle graph in Figure 8.2. The size of each section in degrees was determined using the following steps:
- Express the ratio of each section as a fraction, with the numerator equal to the average number of moths counted on each type of light and the denominator equal to the average total number of moths counted on all the lights:
- White = 12/17
- Yellow = 4/17
- Control = 1/17
- Multiply each fraction by 360° to find the number of degrees in each section:
- White 12/17 × 360° = 254.1°
- Yellow 4/17 × 360° = 84.7°
- Control 1/17 × 360° = 21.2°
To prepare the circle graph, first decide on the diameter needed, then use a compass to draw a circle. Next draw a straight line from the center of the circle to any point on the edge of the circle. Using a protractor, start at this line and mark a dot on the edge of the circle 254.1° from the line. Draw a line to connect this dot to the center of the circle. The pie-shaped section you made represents the number of moths found near the white light. Start the next section on the outside line for the yellow light section. The remaining section will be the no-light section, or control section. Each section should be labeled as shown in Figure 8.2.
Each section of a circle graph represents part of the whole, which always equals 100%. The larger the section, the greater the percentage of the whole. So all of the sections added together must equal 100%.
To determine the percentage of each section, follow these steps:
- Change the fractional ratio for each section to a decimal by dividing the numerator by the denominator:
- White light: 12/17 = .70
- Yellow light: 4/17 = .24
- Control: 1/17 = .06
- Multiply each decimal by 100 to express it as a percentage:
- White light: .70 = 70/100 = 70%
- Yellow light: .24 = 24/100 = 24%
- Control: .06 = 6/100 = 6%
To represent the percentage of moths attracted to each light color, you could color each section of the circle graph with a different color. You could label the percentages on the graph and make a legend explaining the colors of each section as in Figure 8.3.
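For those who prefer to let a short script do the arithmetic, the sketch below reproduces the degree and percentage calculations from the averages above. Note that it prints 71% for white because it does not round 12/17 down to .70 the way the worked example does.

```python
# Turn each average count into degrees of the circle and a percentage of the whole.
averages = {"White": 12, "Yellow": 4, "Control": 1}
total = sum(averages.values())                    # 17 moths in all

for light, count in averages.items():
    degrees = count / total * 360
    percent = count / total * 100
    print(f"{light}: {degrees:.1f} degrees, {percent:.0f}%")
# White: 254.1 degrees, 71%   Yellow: 84.7 degrees, 24%   Control: 21.2 degrees, 6%
```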
A line graph is a graph in which one or more lines are used to show the relationship between the two quantitative variables. The line shows a pattern of change. While a bar graph has one scale, a line graph has two scales. Figure 8.4 shows a line graph of data from a different study in which the problem was to determine if ants communicate by laying a scent trail for other ants to follow to a food source. The line graph shows data for the number of ants observed on one of the paths every 15 minutes for 1 hour. Generally, the independent variable is on the x-axis (the horizontal axis) and the dependent variable is on the y-axis (the vertical axis). For this example, the independent variable of time is on the x-axis and the dependent variable of number of ants is on the y-axis. One unit on the time scale represents 1 minute, and units are marked off in groups of 15 up to a total of 60 units. One unit on the number of ants scale represents 1 ant. Since the largest average counted was 32.2 ants, the scale for ants is numbered by fives from 0 to 35. On the graph, the increase in the angle of the line over time shows that more ants were found on the food as time increased.
Stratigraphy is a key concept to modern archaeological theory and practice. Modern excavation techniques are based on stratigraphic principles. The concept derives from the geological use of the idea that sedimentation takes place according to uniform principles. When archaeological finds are below the surface of the ground (as is most commonly the case), the identification of the context of each find is vital in enabling the archaeologist to draw conclusions about the site and about the nature and date of its occupation. It is the archaeologist's role to attempt to discover what contexts exist and how they came to be created. Archaeological stratification or sequence is the dynamic superimposition of single units of stratigraphy, or contexts.
Contexts are single events or actions that leave discrete, detectable traces in the archaeological sequence or stratigraphy. They can be deposits (such as the back-fill of a ditch), structures (such as walls), or "zero thickness surfaciques", better known as "cuts". Cuts represent actions that remove other solid contexts such as fills, deposits, and walls. An example would be a ditch "cut" through earlier deposits. Stratigraphic relationships are the relationships created between contexts in time, representing the chronological order in which they were created. One example would be a ditch and the back-fill of said ditch. The temporal relationship of "the fill" context to the ditch "cut" context is such that "the fill" occurred later in the sequence; you have to dig a ditch before you can back-fill it. A relationship that is later in the sequence is sometimes referred to as "higher" in the sequence, and a relationship that is earlier, "lower", though this does not refer necessarily to the physical location of the context. It is more useful to think of "higher" as it relates to the context's position in a Harris matrix, a two-dimensional representation of a site's formation in space and time.
Principles or laws
Archaeological stratigraphy is based on a series of axiomatic principles or "laws". They are derived from the principles of stratigraphy in geology but have been adapted to reflect the different nature of archaeological deposits. E.C. Harris notes two principles that were widely recognised by archaeologists by the 1970s:
- The principle of superposition establishes that within a series of layers and interfacial features, as originally created, the upper units of stratification are younger and the lower are older, for each must have been deposited on, or created by the removal of, a pre-existing mass of archaeological stratification.
- The principle that layers can be no older than the age of the most recent artefact discovered within them. This is the basis for the relative dating of layers using artefact typologies. It is analogous to the geological principle of faunal succession, although Harris argued that it was not strictly applicable to archaeology.
He also proposed three additional principles:
- The principle of original horizontality states that any archaeological layer deposited in an unconsolidated form will tend towards a horizontal deposition. Strata which are found with tilted surfaces were so originally deposited, or lie in conformity with the contours of a pre-existing basin of deposition.
- The principle of lateral continuity states that any archaeological deposit, as originally laid down, will be bounded by the edge of the basin of deposition, or will thin down to a feather edge. Therefore, if any edge of the deposit is exposed in a vertical plane view, a part of its original extent must have been removed by excavation or erosion: its continuity must be sought, or its absence explained.
- The principle of stratigraphic succession states that any given unit of archaeological stratification exists within the stratigraphic sequence from its position between the undermost of all higher units and the uppermost of all lower units and with which it has a physical contact.
Combining stratigraphic contexts for interpretation
Understanding a site in modern archaeology is a process of grouping single contexts together in ever larger groups by virtue of their relationships. The terminology of these larger clusters varies depending on the practitioner, but the terms interface, sub-group, and group are common. An example of a sub-group could be the three contexts that make up a burial: the grave cut, the body, and the back-filled earth on top of the body. Sub-groups can then be clustered together with other sub-groups by virtue of their stratigraphic relationship to form groups, which in turn form "phases." A sub-group burial could cluster with other sub-group burials to form a cemetery, which in turn could be grouped with a building, such as a church, to produce a "phase". Phase implies a nearly contemporaneous archaeological horizon, representing "what you would see if you went back to time X". The production of phase interpretations is the first goal of stratigraphic interpretation and excavation.
Archaeologists investigating a site may wish to date the activity rather than the artifacts on site by dating the individual contexts, which represent events. Some dating of objects by their position in the sequence is possible using known datable elements of the archaeological record, or other contexts assumed datable by a regressive form of relative dating, which in turn can fix the events represented by contexts to some range in time. For example, the date of formation of a context which is totally sealed between two datable layers will fall between the dates of the two layers sealing it. However, the dates of contexts often fall within a range of possibilities, so using them to date others is not a straightforward process.
Take the hypothetical section figure A. Here we can see 12 contexts, each numbered with a unique context number and whose sequence is represented in the Harris matrix in figure B.
- A horizontal layer
- Masonry wall remnant
- Backfill of the wall construction trench (sometimes called construction cut)
- A horizontal layer, probably the same as 1
- Construction cut for wall 2
- A clay floor abutting wall 2
- Fill of shallow cut 8
- Shallow pit cut
- A horizontal layer
- A horizontal layer, probably the same as 9
- Natural sterile ground formed before human occupation of the site
- Trample in the base of cut 5, formed by the boots of the workmen who built the structure that wall 2 and floor 6 are associated with.
If we know the date of context 1 and context 9 we can deduce that context 7, the backfilling of pit 8, occurred sometime after the date for 9 but before the date for 1, and if we recover an assemblage of artifacts from context 7 that occur nowhere else in the sequence, we have isolated them with a reasonable degree of certainty to a discrete range of time. In this instance we can now use the date we have for finds in context 7 to date other sites and sequences. In practice a huge amount of cross referencing with other recorded sequences is required to produce dating series from stratigraphic relationships such as the work in seriation.
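As a rough illustration of how such "earlier than" relationships can be handled by software, the sketch below (Python 3.9+) encodes a handful of the relationships from this example and derives a consistent earliest-to-latest ordering. The link between pit cut 8 and layer 9 is an assumption made for the sake of the example, and the full matrix in figure B contains many more relationships.

```python
from graphlib import TopologicalSorter   # standard library, Python 3.9+

# Each context maps to the set of contexts that must be EARLIER than it.
earlier_than = {
    9: {11},   # layer 9 formed on top of the natural, sterile ground (11)
    8: {9},    # the shallow pit is taken to have been cut through layer 9 (assumed)
    7: {8},    # the pit had to exist before it could be back-filled
    1: {7},    # layer 1 seals the back-fill, so it is later
}

order = list(TopologicalSorter(earlier_than).static_order())
print("earliest to latest:", order)      # e.g. [11, 9, 8, 7, 1]
```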
Residual and intrusive finds
One issue in using stratigraphic relationships is that the date of artifacts in a context does not represent the date of the context, but just the earliest date the context could be. If one looks at the sequence in figure A, one may find that the cut for the construction of wall 2, context 5, has cut through layers 9 and 10, and in doing so has introduced the possibility that artifacts from layers 9 and 10 may be redeposited higher up the sequence in the context representing the backfill of the construction cut, context 3. These artifacts are referred to as "residual" or "residual finds". It is crucial that dating a context is based on the latest dating evidence drawn from the context. We can also see that if the fill of cut 5 – the wall 2, backfill 3 and trample 12 — are not removed entirely during excavation because of "undercutting", non-residual artifacts from these later "higher" contexts 2, 3 and 12 could contaminate the excavation of earlier contexts such as 9 and 10 and give false dating information. These artifacts may be termed intrusive finds.
- Archaeological association
- Archaeological context
- Archaeological phase
- Christian Maclagan
- Relationship (archaeology)
- Reverse stratigraphy
- Harris, E. C. (1989) Principles of Archaeological Stratigraphy, 2nd Edition. Academic Press: London and San Diego. ISBN 0-12-326651-3
- A. Carandini, Storie dalla terra. Manuale di scavo archeologico, Torino, Einaudi, 1991
As they move to secondary school and become more independent, children at this age will be facing more money choices and challenges. Now is the time to help them learn how to be responsible about money.
How does talking about money help?
Talking about money with them when they’re becoming more independent is important in establishing good money habits they can take into adulthood.
Our research shows that adults who do better with money were:
- exposed to conversations about money as children
- given money regularly, such as pocket money or payment for chores
- given responsibility for spending and saving from an early age.
This is a great age for learning about money, particularly as they’ll soon be going to secondary school or may have just started.
Secondary school brings strong peer pressure. Although they’ll know more about money, nine to 12-year-olds will also be facing more money choices and challenges. For example, many secondary schools use cards for payment; they’ll need to understand how these work and how to choose what to buy every day.
Did you know?
Our research shows that only four in ten children say they were taught about money and finance in school.
What nine to 12-year-olds understand about money
By the age of nine to 12, many children have a longer attention span and can understand a lot more about money. This includes:
- simple calculations
- that different currencies are used in other countries
- how to plan and manage a basic budget, and keep track of spending
- how to check basic financial information such as receipts, bills, and bank statements
- how advertising is used to persuade people to spend money
- how to compare prices and decide what’s best value for money
- that there are risks associated with spending money online, such as scams
- what bank interest is
- the benefits of saving
- the risks involved in borrowing money.
Pocket money and encouraging children to save
How much pocket money you give isn’t important. Giving children even the smallest amount of money regularly is a great way to help them learn how to manage money.
This could be pocket money or paying them for chores they do around the house, or both.
This helps them to practise learning to save up for the things they really want.
There are many ways to handle pocket money. Some options include:
Weekly pocket money – let them save up for what they want and give treats only at special times such as birthdays or holidays. This can help teach them to budget and save.
Opportunities to earn – they could earn pocket money by doing chores around the house.
Weekly pocket money and opportunities to earn extra money – this way they get regular money to manage, but can earn extra if they want. You may need to set a limit on what they can earn if you are budgeting yourself; explain this limit to your child so they can see you managing your finances.
Before you decide whether you want to give pocket money or have your child earn it, think about what your child will have to buy from their money. Will you buy them extra items or do they pay for it all? Thinking this through will help you to budget and your child to learn what they need to save for.
Build responsibility around money
Use their growing independence to help them learn how to be responsible with money.
Mobile phone responsibility
If they haven’t already asked for a mobile phone, now may be the time they do. With a phone comes the need for financial responsibility.
If you decide they can have a mobile phone, use it as an opportunity to talk about money. Ask them questions that encourage them to think about the impact of a mobile phone on their finances:
- How much do phones cost?
- What is a contract and how does it work?
- How do contracts compare to ‘pay as you go’?
- How much does it cost each month?
- What happens if they use up all their credit?
- What happens if they lose the phone?
Now is also a good time to set up some rules around the use of mobile phones – particularly around money and safety.
“James was given a contract phone when he was 11 years old and had just started secondary school. We got a cap on it so he couldn’t run up any high bills. He can’t download apps without asking as I have the password. He also has to let me check the apps he uses every weekend.” – Amanda
A budget, or money plan, will help them look after their money:
- Encourage them to keep track of their money by writing it down.
- Get them to write down any money they get, how much, and what they spend it on.
- Take a look at it with them regularly – monthly, perhaps. Let them explain their notes to you and talk about what it feels like to see their money going up or down, and how they feel when they know they're reaching the limit of their budget but aren't due any more money yet. Decide together what happens if they go over their budget.
This is also a good opportunity to tell them how you budget to make sure you can pay the bills.
By this age, children can understand more about borrowing money.
It’s valuable for them to know that when you borrow money you usually have to pay it back with interest, so you’re paying back more than you borrowed.
It might also be worth talking about what problems there might be if you can’t pay it back.
Ask them questions about borrowing:
- What do they think are the pros and cons of borrowing money?
- What would they do if they couldn’t pay back the money they borrowed?
- How can they avoid the need to borrow money?
This is also a good time to discuss the importance of saving and having enough money for a rainy day. And the independence that comes with saving and not having to borrow.
Give them a savings challenge
Peer pressure is strong for nine to 12-year-olds, which may mean they want more things. Whether it’s a pair of branded trainers everyone at school is wearing or a new piece of technology.
This can help motivate them to practise saving. It’s a good opportunity to help your child save for whatever it is they’ve set their sights on:
- Sit down with them, with a piece of paper or computer.
- Note down the cost of the item they want.
- Ask the question – ‘how can you get that amount of money?’
- Note down your child’s income, whether that comes from chores, pocket money or grandparents, for example.
- Discuss your child’s weekly spending and how much they could realistically put away to buy the item they want. Encourage them to come up with ideas for how to either earn more or reduce their current spending.
Important money messages
Children can learn valuable ideas from the savings challenge that will help them as adults:
- Saving allows them to have things they wouldn’t be able to afford if they always spend money as soon as they get it.
- Saving brings a sense of achievement, and the item they save up for has special value.
- Planning to save for something with someone else’s help makes goals easier to achieve.
Taking saving further
If they’re doing well with the savings challenge and you want to help them go further, when they’ve achieved one goal, set another.
Gradually increase the size of the goal and the length of time to achieve it, but make sure that it’s realistic – and fun.
Help them think of ways they might be able to save more, perhaps by increasing their income by doing extra chores.
The importance of choice
At this age, you can also explain to them about their own power to make choices around money.
You can explain that this could be choosing to:
- buy or not buy something
- save money
- buy less expensive brands
- give their time rather than expensive gifts
- not buy something just because it’s the latest version.
“When Natalie went to secondary school, I gave her a cheap mobile phone, added her to my contract, and told her she had two choices: to use her phone whenever she wanted; or to manage her use and never go over the text and data allowance. If she took the second choice and achieved it, she would get a decent touch screen phone.” - Alex
More money-management activities
All children develop at different times. For example, some nine to 12-year-olds may respond better to some of the activities we recommend in our How to talk to seven and eight-year-olds about money or How to teach teenagers about money guides. Simply choose the ones you feel are the most suitable.
For more ideas for all age groups, download our Talk, Learn, Do guide (also available in Welsh).
|
A ratio is the comparison or simplified form of two quantities of the same kind. This relation indicates how many times one quantity is equal to the other; or in other words, ratio is a number, which expresses one quantity as a fraction of the other. E.g. Ratio of 3 to 4 is 3 : 4. In this article we will learn the approach applicable to solve various problems on ratios and proportions. In the next few lines you will go through some important concepts related to ratio problems and the methods you should apply to solve those problems. We would like to mention here that you must solve some Ratio and Proportion worksheets to get expertise in this area, only then you would be comfortable in attempting the questions of this area in the exam.
The numbers forming the ratio are called terms. The numerator, “3”, in this case, is known as the antecedent and the denominator, “4”, in this case, is known as the consequent.
- Equivalent Ratios: Let us divide a pizza into 8 equal parts and share it between Ram and Sam in the ratio 2 : 6. The ratio 2 : 6 can be written as 2/6, and 2/6 = 1/3. We know that 2/6 and 1/3 are called equivalent fractions. Similarly, we call the ratios 2 : 6 and 1 : 3 equivalent ratios.
From a given ratio x : y, we can get equivalent ratios by multiplying the terms ‘x’ and ‘y’ by the same non-zero number.
1 : 3 = 2 : 6 = 3 : 9
4 : 5 = 12 : 15 = 16 : 20
- Ratio and Proportion Problems and Solutions
Example 1: Write any 4 equivalent ratios for 4 : 3.
Sol: Given ratio = 4 : 3. The ratio in fractional form = 4/3. We can get equivalent ratios by multiplying both “4” and “3” by 2, 3, 4 and 5; the equivalent fractions of 4/3 are 8/6, 12/9, 16/12 and 20/15.
∴ The equivalent ratios of 4 : 3 are 8 : 6, 12 : 9, 16 : 12, 20 : 15
Example 2: Distribute Rs. 320 in the ratio 1 : 3.
Sol: 1 : 3 means the first quantity is 1 part and the second quantity is 3 parts.
The total number of parts = 1 + 3 = 4. As 4 parts = Rs. 320
∴ 1 part = 320/4 = 80 ∴ 3 parts = 3 × 80 = Rs. 240
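The same part-counting idea can be written as a short Python sketch for readers who want to check such splits programmatically; the helper name split_in_ratio is only an illustrative choice, not a term from this article:
from fractions import Fraction
def split_in_ratio(amount, parts):
    # Split `amount` into shares proportional to the numbers in `parts`.
    total_parts = sum(parts)
    return [Fraction(amount) * p / total_parts for p in parts]
# Example 2: distribute Rs. 320 in the ratio 1 : 3
print(split_in_ratio(320, [1, 3]))   # [Fraction(80, 1), Fraction(240, 1)] -> Rs. 80 and Rs. 240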
- If a : b is a ratio then:
✔ Duplicate ratio of (a : b) is (a² : b²).
✔ Sub-duplicate ratio of (a : b) is (√a : √b).
✔ Triplicate ratio of (a : b) is (a³ : b³).
✔ Sub-triplicate ratio of (a : b) is (∛a : ∛b).
Example 3: What is the duplicate ratio of 2 : 3?
Sol: Duplicate ratio of 2 : 3 = 2² : 3² = 4 : 9.
Example 4: Triplicate ratio of two numbers is 27 : 64. Find their duplicate ratio.
Sol: The triplicate ratio of the two numbers is 27 : 64, so the numbers should be in the ratio ∛27 : ∛64 = 3 : 4. So the duplicate ratio of 3 : 4 = 3² : 4² = 9 : 16.
Example 5: The ratio of two numbers is 25 : 36. Find their sub duplicate ratio.
Sol: Sub-duplicate ratio of 25 : 36 = √25 : √36 = 5 : 6.
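These definitions are straightforward to check in code. Here is a minimal Python sketch; the helper names duplicate, sub_duplicate and triplicate are our own illustrative labels:
def duplicate(a, b):
    # Duplicate ratio of a : b is a^2 : b^2.
    return a**2, b**2
def sub_duplicate(a, b):
    # Sub-duplicate ratio of a : b is sqrt(a) : sqrt(b).
    return a**0.5, b**0.5
def triplicate(a, b):
    # Triplicate ratio of a : b is a^3 : b^3.
    return a**3, b**3
print(duplicate(2, 3))        # (4, 9)     -- Example 3
print(sub_duplicate(25, 36))  # (5.0, 6.0) -- Example 5
print(triplicate(3, 4))       # (27, 64)   -- 3 : 4 is the ratio whose triplicate is 27 : 64 (Example 4)
print(duplicate(3, 4))        # (9, 16)    -- the duplicate ratio asked for in Example 4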
Proportion is represented by the symbol ‘=’ or ‘::’.
If the ratio a : b is equal to the ratio c : d, then a, b, c, d are said to be in proportion.
Using symbols we write as a : b = c : d or a : b :: c : d
- When 4 terms are in proportion, the product of the two extremes (i.e. the first and the fourth value) should be equal to the product of the two middle values (i.e. the second and the third value).
Example 6: Prove that 16 : 12 and 4 : 3 are in proportion.
Sol: The product of the means = 12 × 4 = 48. The product of the extremes = 16 × 3 = 48
As Product of Means = Product of Extremes ∴ 16 : 12, 4 : 3 are in proportion.
Example 7: Find the missing number in 3 : 4 = 12 : ____
Sol: Let the missing number be “a”. We know that Product of means = Product of extremes.
Therefore 3 × a = 4 × 12; By dividing both sides by 3, we get the missing term = (4 × 12)/3 = 16
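This cross-multiplication step can be written as a one-line Python helper; the name missing_extreme is just an illustrative label:
def missing_extreme(a, b, c):
    # Given a : b = c : d with d unknown, use product of extremes = product of means (a*d = b*c).
    return b * c / a
# Example 7: 3 : 4 = 12 : ?
print(missing_extreme(3, 4, 12))  # 16.0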
Example 8: Taking 4 and 16 are means, write any two proportions.
Sol: Given 4 and 16 are means. So, __: 4 = 16: __
The product of Means is 4 × 16 = 64. Hence the product of Extremes must also be 64
64 can be written as 4 × 16 or 2 × 32, etc. Two such proportions are 2 : 4 :: 16 : 32 and 16 : 4 :: 16 : 4.
- FOURTH PROPORTIONAL:
If a : b = c : d, then d is called the fourth proportional to a, b, c.
Example 9: Find the fourth proportional of the numbers 12, 48, 16.
Sol: Let fourth proportional is x. Now as per the concept above the product of extremes should be equal to the product of the means → 12/48 = 16/x → x = 64.
- THIRD PROPORTIONAL: If a : b = c : d, then c is called the third proportional to a and b.
Example 10: If 2, 5, x, 30 are in proportion, find the third proportional “x”.
Sol: Here x is third proportional. According to the concept 2/5 = x/30 → x = 12.
- MEAN PROPORTIONAL: Mean proportional between a and b is √(ab).
Example 11: Find the mean proportional of the numbers 10 and 1000.
Sol: Mean proportional between a and b is √ab. Let the mean proportional of 10 and 1000 be x.
So x = √(10 × 1000) = √10000 = 100.
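Examples 9 to 11 each reduce to a one-line formula, which the following Python sketch makes explicit (the function names are illustrative only):
import math
def fourth_proportional(a, b, c):
    # d such that a : b = c : d, i.e. d = b*c/a.
    return b * c / a
def third_proportional(a, b, d):
    # x such that a : b = x : d (the third term of the proportion).
    return a * d / b
def mean_proportional(a, b):
    # x such that a : x = x : b, i.e. x = sqrt(a*b).
    return math.sqrt(a * b)
print(fourth_proportional(12, 48, 16))  # 64.0  -- Example 9
print(third_proportional(2, 5, 30))     # 12.0  -- Example 10
print(mean_proportional(10, 1000))      # 100.0 -- Example 11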
- CONTINUED PROPORTION: a, b, c are in Continued Proportion if a : b = b : c. Here b is called the Mean Proportional and is equal to the square root of the product of a and c: b² = a × c → b = √(ac).
- If a/b = b/c = c/d etc., then a, b, c, d are in Geometric Progression.
Let a/b = b/c = c/d = k; then c = dk, b = ck and a = bk.
Since c = dk, b = dk × k = dk² and a = bk = dk² × k = dk³, implying they are in Geometric Progression.
If the three ratios, a : b, b : c, c : d are known, we can find a : d by the multiplying these three ratios
a/d = a/b × b/c × c/d
- If a/b = c/d= e/f , then each of these ratios is equal to (a+c+e)/(b+d+f)
- If a/b = c/d, then b/a = d/c (Invertendo)
- If a/b = c/d, then a/c = b/d (Alternendo)
- If a/b = c/d, then (a+b)/b = (c+d)/d (Componendo)
- If a/b = c/d, then (a-b)/b = (c-d)/d (Dividendo)
- If a/b = c/d, then (a+b)/(a-b) = (c+d)/(c-d) (Componendo & Dividendo)
Example 12: If a : b = 2 : 5, then find the value of (3a + 4b) : (5a + 6b).
Sol: Let a = 2x & b = 5x. Then (3a + 4b): (5a + 6b) = (3 × 2x + 4 × 5x) : (5 × 2x + 6 × 5x) → 26x:40x = 13 : 20.
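The identities above and the result of Example 12 are easy to verify numerically. The Python sketch below checks each rule for one arbitrary pair of equal ratios and then re-computes Example 12; it is only a sanity check, not part of the original solution method:
from fractions import Fraction
a, b, c, d = 2, 5, 4, 10                                 # chosen so that a/b == c/d
assert Fraction(b, a) == Fraction(d, c)                  # Invertendo
assert Fraction(a, c) == Fraction(b, d)                  # Alternendo
assert Fraction(a + b, b) == Fraction(c + d, d)          # Componendo
assert Fraction(a - b, b) == Fraction(c - d, d)          # Dividendo
assert Fraction(a + b, a - b) == Fraction(c + d, c - d)  # Componendo & Dividendo
# Example 12: a : b = 2 : 5, evaluate (3a + 4b) : (5a + 6b)
a, b = 2, 5
print(Fraction(3*a + 4*b, 5*a + 6*b))                    # 13/20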
- DIRECT VARIATION: Two quantities “x” and “y” are said to be in direct variation if an increase in one quantity results in an increase in the other quantity, and a decrease in one results in a decrease in the other quantity. If two quantities always vary in the same ratio, then they are in direct variation.
Examples for Direct Variation:
- Distance and Time are in Direct Variation, because more the distance travelled, the time taken will be more (if speed remains the same).
- Principal and Interest are in Direct Variation, because if the Principal is more, the Interest earned will also be more.
- Purchase of Articles and the amount spent are in Direct Variation, because the purchase of more articles will cost more money. If two quantities “x” and “y” vary directly, then x/y remains constant and positive; this constant is called the constant of variation. If x ∝ y, then x = py, where p is the constant of proportionality (x/y = p), and the ratio of any two values of “x” is equal to the ratio of the corresponding values of “y”: x₁/x₂ = y₁/y₂.
Example 13: Sam takes 2 hours to cover 40 km. Find the distance he will travel in 8 hours.
Sol: Let distance covered = y. When time increases the distance also increases. Therefore, they are in direct variation, 2 : 8 = 40 : y → y = (40 × 8)/2 = 160 km. Sam will travel 160 km in 8 hours.
Example 14: The purchase price of 15 articles is Rs 4500. Find number of articles purchased for Rs. 1500.
Sol: Let articles purchased = x. When amount spent decreases, then number of articles also decreases. So they are in direct variation → 15 : x = 4500 : 1500 → x = (15 × 1500) / 4500 = 5
Example 15: The cost of 10 kg sugar is Rs 360. Find the cost of 18.5 kg sugar.
Sol: Let the cost be Rs. X. When quantity increases, cost also increases. So they are in direct variation → 10/18.5 = 360/X → X = 666
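Examples 13 to 15 all use the same proportionality step, which can be captured in one small Python helper (the name direct_variation is an illustrative choice, not standard notation):
def direct_variation(x1, y1, x2):
    # If x and y vary directly (x/y is constant), return the y that pairs with x2.
    return y1 * x2 / x1
print(direct_variation(2, 40, 8))        # 160.0 km       -- Example 13
print(direct_variation(4500, 15, 1500))  # 5.0 articles   -- Example 14
print(direct_variation(10, 360, 18.5))   # 666.0 rupees   -- Example 15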
- INVERSE VARIATION:
If two quantities “x” and “y” are such that an increase or decrease in “x” leads to a corresponding decrease or increase in “y” in the same ratio, then we can say they vary indirectly, or that the variation is inverse. Suppose 6 men can do a piece of work in 18 days; then 12 men can do the same job in 9 days. That means if we double the number of men, the number of days gets halved, so there is an inverse relation between the number of men and the number of days.
In general, when two variables x and y are such that xy = k, where k is a non-zero constant, we say that y varies inversely with x. In notation, inverse variation is written as y ∝ 1/x → y = p/x, where p is the constant of proportionality → xy = p. So x₁y₁ = x₂y₂.
Examples for Inverse Variation:
- Work and Time are in Inverse Variation, because more the number of the workers, lesser will be the time required to complete a job.
- Speed and Time are in Inverse Variation, because higher the speed, the lower is the time taken to cover a distance.
- Population and Quantity of food are in Inverse Variation, because if the population increases, the food availability decreases.
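The relation x₁y₁ = x₂y₂ can be wrapped in a one-line Python helper; the name inverse_variation is an illustrative label. The same helper also answers Example 16 below:
def inverse_variation(x1, y1, x2):
    # If x and y vary inversely (x*y is constant), return the y that pairs with x2.
    return x1 * y1 / x2
# 6 men finish a job in 18 days; how long do 12 men take?
print(inverse_variation(6, 18, 12))   # 9.0 days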
Example 16: Suppose that y varies inversely as x and that y = 12 when x = 6.
a) Form an equation connecting x and y.
b) Calculate the value of y when x = 18.
Sol: x and y are in inverse proportion. So x₁y₁ = x₂y₂ → 6 × 12 = 18 × y → y = 4 |
Download Multiplying Fractions with Whole Numbers with Word Problems (with denominators from 2 to 6) Worksheets
Multiplying fractions with whole numbers is the process of adding the given fraction to itself as many times as the whole number indicates.
- Multiplication is one of the four basic operations, the other three being addition, subtraction and division.
- The equation is composed of multiplicand, multiplier and product.
- It implies repeated addition of the multiplicand (to itself) as many times as the multiplier indicates.
MULTIPLICATION OF FRACTIONS WITH WHOLE NUMBERS
The process of multiplying a fraction and a whole number may be explained similarly with multiplying two fractions. See the example below.
Given: ½ x 5 = ?
Step 1: Express the whole number as a fraction. Whole numbers have an implied denominator equal to 1 (for example, 5 = 5/1).
Step 2: Multiply the parts of the fraction. Multiply the numerators and denominators individually.
Step 3: Simplify if possible. Improper fractions should be converted to mixed numbers for the final answer.
One way to check the answer is by adding the fraction as many times as the value of the whole number.
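The three steps and the repeated-addition check can be reproduced with Python’s fractions module. This is only an illustrative sketch, not part of the worksheet itself:
from fractions import Fraction
half = Fraction(1, 2)              # the fraction 1/2
product = half * Fraction(5, 1)    # Steps 1 and 2: write 5 as 5/1, then multiply numerators and denominators
print(product)                     # 5/2 -> Step 3: as a mixed number, 2 1/2
# Check by adding 1/2 five times, as described above
assert product == half + half + half + half + half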
One way to determine the product of a fraction and a whole number is through visual representation.
TRANSLATING WORD PROBLEMS
When a word problem is given, one can tell that it implies multiplication with the help of clue words. Some of these clue words are listed below.
- “one-half times five”
- “one-half multiplied by five”
- “product of one-half and five”
- “five groups of one-half”
All of these translate to “½ x 5” (or “5 x ½”, if the commutative property of multiplication is applied).
Multiplying Fractions with Whole Numbers with Word Problems (with denominators from 2 to 6) Worksheets
This is a fantastic bundle which includes everything you need to know about Multiplying Fractions with Whole Numbers with Word Problems (with denominators from 2 to 6) across 15+ in-depth pages. These are ready-to-use Common core aligned Grade 4 Math worksheets.
Each ready-to-use worksheet collection includes 10 activities and an answer guide. Not teaching common core standards? Don’t worry! All our worksheets are completely editable, so they can be tailored for your curriculum and target audience.
|
Origin and evolution of the hydrosphere
It is not very likely that the total amount of water at the Earth’s surface has changed significantly over geologic time. Based on the ages of meteorites, the Earth is thought to be 4.6 billion years old. The oldest rocks known date 3.8 billion years in age, and these rocks, though altered by post-depositional processes, show signs of having been deposited in an environment containing water. There is no direct evidence for water for the period between 4.6 and 3.8 billion years ago. Thus, ideas concerning the early history of the hydrosphere are closely linked to theories about the origin of the Earth.
The Earth is thought to have accreted from a cloud of ionized particles around the Sun. This gaseous matter condensed into small particles that coalesced to form a protoplanet, which in turn grew by the gravitational attraction of more particulates. Some of these particles had compositions similar to that of carbonaceous chondrite meteorites, which may contain up to 20 percent water. Heating of this initially cool, unsorted conglomerate by the decay of radioactive elements and the conversion of kinetic and potential energy to heat resulted in the development of the Earth’s liquid iron core and the gross internal zonation of the planet (i.e., differentiation into core, mantle, and crust). It has been concluded that the Earth’s core formed over a period of about 500 million years. It is likely that core formation resulted in the escape of an original primitive atmosphere and its replacement by one derived from the loss of volatile substances from the planetary interior.
At an early stage the Earth thus did not have water or water vapour at its surface. Once the planet’s surface had cooled sufficiently, water contained in the minerals of the accreted material and released at depth could escape to the surface and, instead of being lost to space, cooled and condensed to form the initial hydrosphere. A large, cool Earth most certainly served as a better trap for water than a small, hot body because the lower the temperature, the less likelihood for water vapour to escape, and the larger the Earth, the stronger its gravitational attraction for water vapour. Whether most of the degassing took place during core formation or shortly thereafter or whether there has been significant degassing of the Earth’s interior throughout geologic time remains uncertain. It is likely that the hydrosphere attained its present volume early in the Earth’s history, and since that time there have been only small losses and gains. Gains would be from continuous degassing of the Earth; the present degassing rate of juvenile water has been determined as being only 0.3 cubic kilometre per year. Water loss in the upper atmosphere is by photodissociation, the breakup of water vapour molecules into hydrogen and oxygen due to the energy of ultraviolet light. The hydrogen is lost to space and the oxygen remains behind. Only about 4.8 × 10−4 cubic kilometre of water vapour is presently destroyed each year by photodissociation. This low rate can be readily explained: the very cold temperatures of the upper atmosphere result in a cold trap at an altitude of about 15 kilometres, where most of the water vapour condenses and returns to lower altitudes, thereby escaping photodissociation. Since the early formation of the hydrosphere, the amount of water vapour in the atmosphere has been regulated by the temperature of the Earth’s surface—hence its radiation balance. Higher temperatures imply higher concentrations of atmospheric water vapour, while lower temperatures suggest lower atmospheric levels.
The early hydrosphere
The gases released from the Earth during its early history, including water vapour, have been called excess volatiles because their masses cannot be accounted for simply by rock weathering. These volatiles are thought to have formed the early atmosphere of the Earth. At an initial crustal temperature of about 600° C, almost all of these compounds, including H2O, would have been in the atmosphere. The sequence of events that occurred as the crust cooled is difficult to reconstruct. Below 100° C all of the water would have condensed, and the acid gases would have reacted with the original igneous crustal minerals to form sediments and an initial hydrosphere that was dominated by a salty ocean. If the reaction rates are assumed to have been slow relative to cooling, an atmosphere of 600° C would have contained, together with other compounds, water vapour, carbon dioxide, and hydrogen chloride (HCl) in a ratio of 20:3:1 and cooled to the critical temperature of water (i.e., 374° C). The water therefore would have condensed into an early hot ocean. At this stage, the hydrogen chloride would have dissolved in the ocean (about one mole per litre), but most of the carbon dioxide would have remained in the atmosphere, with only about 0.5 mole per litre in the ocean water. This early acid ocean would have reacted vigorously with crustal minerals, dissolving out silica and cations and creating a residue composed principally of aluminous clay minerals that would form the sediments of the early ocean basins.
This is one of several possible pathways for the early surface of the Earth. Whatever the actual case, after the Earth’s surface had cooled to 100° C, it would have taken only a short time for the remaining acid gases to be consumed in reactions involving igneous rock minerals. The presence of cyanobacteria (e.g., blue-green algae) in the fossil record of rocks older than three billion years attests to the fact that the Earth’s surface had cooled to temperatures lower than 100° C by this time, and neutralization of the original acid volatiles had taken place. It is possible, however, that, because of increased greenhouse gas concentrations (see below) in the Early Archean era (about 3.8 to 3.4 billion years ago), the Earth’s surface could still have been warmer than today.
If most of the degassing of primary volatile substances from the Earth’s interior occurred early, the chloride released by the reaction of hydrochloric acid with rock minerals would be found in the oceans or in evaporite deposits, and the oceans would have a salinity and volume comparable to that of today. This conclusion is based on the assumption that there has been no drastic change in the ratios of volatiles released through geologic time. The overall generalized reaction indicative of the chemistry leading to the formation of the early oceans can be written in the form: primary igneous rock minerals + acid volatiles + H2O → sedimentary rocks + oceans + atmosphere. It should be noted from this equation that, if all the acid volatiles and H2O were released early in the history of the Earth and in the proportions found today, then the total original sedimentary rock mass-produced would be equal to that of the present, and ocean salinity and volume would be close to those of today as well. If, on the other hand, degassing were linear with time, then the sedimentary rock mass would have accumulated at a linear rate, as would have oceanic volume. The salinity of the oceans, however, would remain nearly the same if the ratios of volatiles degassed did not change with time. The most likely situation is the one presented here—namely, that major degassing occurred early in Earth’s history, after which minor amounts of volatiles were released episodically or continuously for the remainder of geologic time. The salt content of the oceans based on the constant proportions of volatiles released would depend primarily on the ratio of sodium chloride locked up in evaporites to that dissolved in the oceans. If all the sodium chloride in evaporites were added to the oceans today, the salinity would be approximately doubled. This value gives a sense of the maximum salinity that the oceans could have attained throughout geologic time.
One component absent from the early Earth’s surface was free oxygen; it would not have been a constituent released from the cooling crust. Early production of oxygen was by the photodissociation of water in the Earth’s atmosphere, a process that was triggered by the absorption of the Sun’s ultraviolet radiation. The reaction is H2O + hν → H2 + ½O2, in which hν represents the photon of ultraviolet light. The hydrogen produced would escape into space, while the oxygen would react with the early reduced gases by reactions such as 2H2S + 3O2 → 2SO2 + 2H2O. Oxygen production by photodissociation gave the early reduced atmosphere a start toward present-day conditions, but it was not until the appearance of photosynthetic organisms approximately three billion years ago that oxygen could accumulate in the Earth’s atmosphere at a rate sufficient to give rise to today’s oxygenated environment. The photosynthetic reaction leading to oxygen production is given in equation (6).
The transitional hydrosphere
The nature of the rock record from the time of the first sedimentary rocks (approximately 3.8 billion years ago) to about one to two billion years ago suggests that the amount of oxygen in the Earth’s atmosphere was significantly lower than it is today and that there were continuous chemical trends in the sedimentary rocks formed and, more subtly, in the composition of the hydrosphere. The chemistry of rocks shifted dramatically during this transitional period. The source rocks of sediments during this time may have been more basaltic than subsequent ones. Sedimentary debris was formed by the alteration of such source rocks in an oxygen-deficient atmosphere and accumulated primarily under anaerobic marine conditions. The chief difference between reactions involving mineral–ocean equilibria at this time and the present day was the role played by ferrous iron (i.e., reduced state of iron). The concentration of dissolved iron in modern oceans is low because of the insolubility of oxidized iron oxides. During the transition stage and earlier, oxygen-deficient environments were prevalent, and these favoured the formation of minerals containing ferrous iron from the alteration of rocks slightly more rich in basalt than those of today. Indeed, iron carbonate siderite and iron silicate greenalite, in close association with chert and iron sulfide pyrite, are characteristic minerals that occur in iron formations of the middle Precambrian (about 2.4 to 1.5 billion years ago). The chert originally was deposited as amorphous silica; equilibrium between amorphous silica, siderite, and greenalite at 25° C and a total pressure of one atmosphere requires a carbon dioxide pressure of about 10−2.5 atmosphere, or 10 times the present-day value.
The oceans of this transitional period can be thought of as a solution that resulted from an acid leach of basaltic rocks, and, because the neutralization of the volatile acid gases was not restricted primarily to land areas as it is today, much of this alteration may have occurred by submarine processes. Anaerobic depositional environments with internal carbon dioxide pressures of about 10−2.5 atmosphere prevailed, and the oxygen-deficient atmosphere itself may have had a carbon dioxide pressure close to 10−2.5 atmosphere. If so, the pH of early ocean water was lower than that of modern seawater and the calcium concentration was higher; moreover, the early ocean water was probably saturated with respect to amorphous silica—roughly 120 ppm.
To simulate what might have occurred, it is helpful to imagine emptying the Pacific basin, throwing in great masses of broken basaltic material, filling it with hydrogen chloride dissolved in water so that the acid becomes neutralized, and then carbonating the solution by bubbling carbon dioxide through it. Oxygen would not be permitted into the system. The hydrochloric acid would leach the rocks, resulting in the release and precipitation of silica and the production of a chloride ocean containing sodium, potassium, calcium, magnesium, aluminum, iron, and reduced sulfur species in the proportions present in the rocks. As complete neutralization was approached, the aluminum could begin to precipitate as hydroxides and then combine with precipitated silica to form cation-deficient aluminosilicates. As the neutralization process reached its end, the aluminosilicates would combine with more silica and with cations to form such minerals as chlorite, and ferrous iron would combine with silica and sulfur to produce greenalite and pyrite. In the final solution, chlorine would be balanced by sodium and calcium in roughly equal proportions, with subordinate amounts of potassium and magnesium; aluminum would be quantitatively removed, and silicon would be at saturation with amorphous silica. If this solution were then carbonated, calcium would be removed as calcium carbonate, and the chlorine balance would be maintained by abstraction of more sodium from the primary rock. The sediments formed in this system would contain chiefly silica, ferrous iron silicates, chloritic minerals, calcium carbonate, calcium-magnesium carbonates, and small amounts of pyrite.
If the hydrogen chloride added were in excess of the carbon dioxide, the resultant oceans would have a high content of calcium chloride (CaCl2), but with a pH still near neutrality. If the carbon dioxide added were in excess of the chlorine, calcium would be precipitated as carbonate until it reached a level roughly that of present-day ocean waters—namely, a few hundred parts per million.
If this newly created ocean were left undisturbed for several hundred million years, its waters would evaporate and be transported onto the continents (in the form of precipitation); streams would transport their loads into it. The sediments produced in this ocean would be uplifted and incorporated into the continents. The influence of the continental debris would gradually be felt and the pH might change somewhat. Iron would be oxidized out of the ferrous silicates to yield iron oxides, but the composition of the water would not vary substantially.
The primary minerals of igneous rocks are all mildly basic compounds. When these minerals react in excess with acids such as hydrogen chloride and carbon dioxide, they produce neutral or mildly alkaline solutions as well as a set of altered aluminosilicate and carbonate reaction products. It is improbable that seawater has changed through time from a solution approximately in equilibrium with these reaction products—i.e., with clay minerals and carbonates.
The modern hydrosphere
It is likely that the hydrosphere achieved its modern chemical characteristics about 1.5 to two billion years ago. The chemical and mineralogical compositions and the relative proportions of sedimentary rocks of this age differ little from their counterparts of the Paleozoic era (from 540 to 245 million years ago). Calcium sulfate deposits of late Precambrian age (about 1.5 billion to 540 million years ago) attest to the fact that the acid sulfur gases had been neutralized to sulfate by this time. Chemically precipitated ferric oxides in late Precambrian sedimentary rocks indicate available free oxygen, whatever its percentage. The chemistry and mineralogy of middle and late Precambrian shales are similar to those of Paleozoic shales. The carbon isotopic signature of carbonate rocks has been remarkably constant for more than three billion years, indicating exceptional stability in size and fluxes related to organic carbon. The sulfur isotopic signature of sulfur phases in rocks strongly suggests that the sulfur cycle involving heterotrophic bacterial reduction of sulfate was in operation 2.7 billion years ago. It therefore appears that continuous cycling of sediments similar to those of today has occurred for 1.5 to two billion years and that these sediments have controlled hydrospheric, and particularly oceanic, composition.
It was once thought that the saltiness of the modern oceans simply represents the storage of salts derived from rock weathering and transported to the oceans by fluvial processes. With increasing knowledge of the age of the Earth, however, it was soon realized that, at the present rate of delivery of salts to the ocean or even at much reduced rates, the total salt content and the mass of individual salts in the oceans could be attained in geologically short time intervals compared to the planet’s age. The total mass of salt in the oceans can be accounted for at today’s rates of stream delivery in about 12 million years. The mass of dissolved silica in ocean water can be doubled in just 20,000 years by the addition of stream-derived silica; to double the sodium content would take 70 million years. It then became apparent that the oceans were not merely an accumulator of salts; rather, as water evaporated from the oceans, together with some salt, the salts introduced must be removed in the form of minerals deposited in sediments. Accordingly, the concept of the oceans as a chemical system changed from that of a simple accumulator to that of a steady-state system in which rates of inflow of materials equal rates of outflow. The steady-state concept permits influx to vary with time, but the inflow would be matched by nearly simultaneous and equal variation of efflux.
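The “geologically short time” figures cited above follow from dividing an oceanic inventory by the annual river input. The Python lines below illustrate that style of calculation with round, assumed figures; they are not values quoted in this article:
ocean_salt_mass_tonnes = 5e16              # assumed total dissolved salt in the oceans (illustrative order of magnitude)
river_salt_input_tonnes_per_year = 4e9     # assumed annual delivery of dissolved salts by rivers (illustrative)
print(ocean_salt_mass_tonnes / river_salt_input_tonnes_per_year)   # ~1.25e7, i.e. on the order of 10 million years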
In recent years, this steady-state conceptual view of the oceans has undergone some modification. In particular, it has been found necessary to treat components of ocean water in terms of all their influxes and effluxes and to be more cognizant of the time scale of application of the steady-state concept. Indeed, the recent increase in the carbon dioxide concentration of the atmosphere due to the burning of fossil fuels may induce a change in the pH and dissolved inorganic carbon concentrations of surface ocean water on a time scale measured in hundreds of years. If fossil-fuel burning were to cease, return to the original state of seawater composition could take thousands of years. Ocean water is not in steady state with respect to carbon on these time scales, but on a longer geologic time scale it certainly could be. Even on this longer time scale, however, oceanic composition has varied because of natural changes in the carbon dioxide level of the atmosphere and because of other factors.
It appears that the best description of modern seawater composition is that of a chemical system in a dynamic quasi-steady state. Changes in composition may occur over time, but the system always seems to return to a time-averaged, steady-state composition. In other words, since 1.5 to two billion years ago, evolutionary chemical changes in the hydrosphere have been small when viewed against the magnitude of previous change.
It should be noted that rivers supply dissolved constituents to the oceans, whereas high- and low-temperature reactions between seawater and submarine basalts and reactions in sediment pore waters may add or remove constituents from ocean water. Biological processes involved in the formation of the opaline silica skeletons of diatoms and radiolarians and the carbonate skeletons of planktonic foraminiferans and coccolithophorids chiefly remove calcium and silica from seawater. Exchange reactions between river-borne clays entering seawater are particularly significant for sodium and calcium ions. Most of the carbon imbalance in ocean water represents carbon released to the ocean–atmosphere system during precipitation of carbonate minerals—i.e., Ca2+ + 2HCO3− → CaCO3 + CO2 + H2O.
In the case of iron, it has been documented that “dissolved” iron carried by rivers is rapidly precipitated as hydroxides in the mixing zone with seawater and that the reduced dissolved iron released from anaerobic sediments also is rapidly precipitated under the oxic conditions (i.e., those with oxygen present) prevailing in the water column. Iron is also precipitated as iron smectites, hydrated iron oxides, and nontronite (iron-rich montmorillonite) in the deep sea. It is thus likely that iron is removed by these processes. |
An ethnic group is a group of people whose members identify with each other through a common heritage that is real or assumed, sharing cultural characteristics. This shared heritage may be based upon putative common ancestry, history, kinship, religion, language, shared territory, nationality or physical appearance. Members of an ethnic group are conscious of belonging to an ethnic group; moreover, ethnic identity is further marked by the recognition from others of a group's distinctiveness.
According to "Challenges of Measuring an Ethnic World: Science, politics, and reality", a conference organised by Statistics Canada and the United States Census Bureau (April 1–3, 1992), "Ethnicity is a fundamental factor in human life: it is a phenomenon inherent in human experience." However, many social scientists, such as anthropologists Fredrik Barth and Eric Wolf, do not consider ethnic identity to be universal. They regard ethnicity as a product of specific kinds of inter-group interactions, rather than an essential quality inherent to human groups.
Processes that result in the emergence of such identification are called ethnogenesis. Members of an ethnic group, on the whole, claim cultural continuities over time. Historians and cultural anthropologists have documented, however, that often many of the values, practices, and norms that imply continuity with the past are of relatively recent invention.
According to Thomas Hylland Eriksen, the study of ethnicity was dominated by two distinct debates until recently. One is between "primordialism" and "instrumentalism". In the primordialist view, the participant perceives ethnic ties collectively, as an externally given, even coercive, social bond. The instrumentalist approach, on the other hand, treats ethnicity primarily as an ad-hoc element of a political strategy, used as a resource for interest groups for achieving secondary goals such as, for instance, an increase in wealth, power or status. This debate is still an important point of reference in Political science, although most scholars' approaches fall between the two poles.
The second debate is between "constructivism" and "essentialism". Constructivists view national and ethnic identities as the product of historical forces, often recent, even when the identities are presented as old. Essentialists view such identities as ontological categories defining social actors, and not the result of social action.
According to Eriksen, these debates have been superseded, especially in anthropology, by scholars' attempts to respond to increasingly politicised forms of self-representation by members of different ethnic groups and nations. This is in the context of debates over multiculturalism in countries, such as the United States and Canada, which have large immigrant populations from many different cultures, and post-colonialism in the Caribbean and South Asia.
Defining ethnicity
The terms "ethnicity" and "ethnic group" are derived from the Greek word ethnos, normally translated as "nation" or commonly said people of the same race that share a distinctive culture. The term "ethnic" and related forms were used in English in the meaning of "pagan/ heathen" from the 14th century through the middle of the 19th century. ThiE. Tonkin, M. McDonald and M. Chapman, History and Ethnicity (London 1989), pp. 11-17 (quoted in J. Hutchinson & A.D. Smith (eds.), Oxford readers: Ethnicity (Oxford 1996), pp. 18-24)</ref>
The modern usage of "ethnic group", however, reflects the different kinds of encounters industrialised states have had with subordinate groups, such as immigrants and colonised subjects; "ethnic group" came to stand in opposition to "nation", to refer to people with distinct cultural identities who, through migration or conquest, had become subject to a foreign state. The modern usage of the word is relatively new—1851 — with the first usage of the term ethnic group in 1935, and entering the Oxford English Dictionary in 1972.
The modern usage definition of the Oxford English Dictionary is:
- 2.a. Pertaining to race; peculiar to a race or nation; ethnological. Also, pertaining to or having common racial, cultural, religious, or linguistic characteristics, esp. designating a racial or other group within a larger system; hence (U.S. colloq.), foreign, exotic.
- b ethnic minority (group), a group of people differentiated from the rest of the community by racial origins or cultural background, and usu. claiming or enjoying official recognition of their group identity. Also attrib.
- 3 A member of an ethnic group or minority. orig. U.S.—Oxford English Dictionary "ethnic, a. and n."
Writing about the usage of the term "ethnic" in the ordinary language of Great Britain and the United States, Wallman notes that
- The term 'ethnic' popularly connotes 'race' in Britain, only less precisely, and with a lighter value load. In North America, by contrast, 'race' most commonly means color, and 'ethnics' are the descendents of relatively recent immigrants from non-English-speaking countries. 'Ethnic' is not a noun in Britain. In effect there are no 'ethnics'; there are only 'ethnic relations'.
Thus, in today's everyday language, the words "ethnic" and "ethnicity" still have a ring of exotic peoples, minority issues and race relations.
Within the social sciences, however, the usage has become more generalized to all human groups that explicitly regard themselves and are regarded by others as culturally distinctive. Among the first to bring the term "ethnic group" into social studies was the German sociologist Max Weber, who defined it as:
[T]hose human groups that entertain a subjective belief in their common descent because of similarities of physical type or of customs or both, or because of memories of colonization and migration; this belief must be important for group formation; furthermore it does not matter whether an objective blood relationship exists.
Conceptual history of ethnicity
Weber maintained that ethnic groups were künstlich (artificial, i.e. a social construct) because they were based on a subjective belief in shared Gemeinschaft (community). Secondly, this belief in shared Gemeinschaft did not create the group; the group created the belief. Third, group formation resulted from the drive to monopolise power and status. This was contrary to the prevailing naturalist belief of the time, which held that socio-cultural and behavioral differences between peoples stemmed from inherited traits and tendencies derived from common descent, then called "race".
Another influential theoretician of ethnicity was Fredrik Barth, whose "Ethnic Groups and Boundaries" from 1969 has been described as instrumental in spreading the usage of the term in social studies in the 1980s and 1990s. Barth went further than Weber in stressing the constructed nature of ethnicity. To Barth, ethnicity was perpetually negotiated and renegotiated by both external ascription and internal self-identification. Barth's view is that ethnic groups are not discontinuous cultural isolates, or logical a prioris to which people naturally belong. He wanted to part with anthropological notions of cultures as bounded entities, and ethnicity as primordialist bonds, replacing it with a focus on the interface between groups. "Ethnic Groups and Boundaries", therefore, is a focus on the interconnectedness of ethnic identities. Barth writes: "[...] categorical ethnic distinctions do not depend on an absence of mobility, contact and information, but do entail social processes of exclusion and incorporation whereby discrete categories are maintained despite changing participation and membership in the course of individual life histories."
In 1978, anthropologist Ronald Cohen claimed that the identification of "ethnic groups" in the usage of social scientists often reflected inaccurate labels more than indigenous realities:
... the named ethnic identities we accept, often unthinkingly, as basic givens in the literature are often arbitrarily, or even worse inaccurately, imposed. In this way, he pointed to the fact that identification of an ethnic group by outsiders, e.g. anthropologists, may not coincide with the self-identification of the members of that group. He also described that in the first decades of usage, the term ethnicity had often been used in lieu of older terms such as "cultural" or "tribal" when referring to smaller groups with shared cultural systems and shared heritage, but that "ethnicity" had the added value of being able to describe the commonalities between systems of group identity in both tribal and modern societies. Cohen also suggested that claims concerning "ethnic" identity (like earlier claims concerning "tribal" identity) are often colonialist practices and effects of the relations between colonized peoples and nation-states.
Social scientists have thus focused on how, when, and why different markers of ethnic identity become salient. Thus, anthropologist Joan Vincent observed that ethnic boundaries often have a mercurial character. Ronald Cohen concluded that ethnicity is "a series of nesting dichotomizations of inclusiveness and exclusiveness". He agrees with Joan Vincent's observation that (in Cohen's paraphrase) "Ethnicity ... can be narrowed or broadened in boundary terms in relation to the specific needs of political mobilization." This may be why descent is sometimes a marker of ethnicity, and sometimes not: which diacritic of ethnicity is salient depends on whether people are scaling ethnic boundaries up or down, and whether they are scaling them up or down depends generally on the political situation.
Ethnies and ethnic categories
In order to avoid the problems of defining ethnic classification as labelling of others or as self-identification, it has been proposed to distinguish between concepts of "ethnic categories", "ethnic networks" and "ethnic communities" or "ethnies".
- An "ethnic category" is a category set up by outsiders, that is, those who are not themselves members of the category, and whose members are populations that are categorised by outsiders as being distinguished by attributes of a common name or emblem, a shared cultural element and a connection to a specific territory. But, members who are ascribed to ethnic categories do not themselves have any awareness of their belonging to a common, distinctive group.
- At the level of "ethnic networks", the group begins to have a sense of collectiveness, and at this level, common myths of origin and shared cultural and biological heritage begins to emerge, at least among the élites.
- At the level of "ethnies" or "ethnic communities", the members themselves have clear conceptions of being "a named human population with myths of common ancestry, shared historical memories, and one or more common elements of culture, including an association with a homeland, and some degree of solidarity, at least among the élites". That is, an ethnie is self-defined as a group, whereas ethnic categories are set up by outsiders whether or not their own members identify with the category given them.
- A "Situational Ethnicity" is an Ethnic identity that is chosen for the moment based on the social setting or situation.
Approaches to understanding ethnicity
Different approaches to understanding ethnicity have been used by different social scientists when trying to understand the nature of ethnicity as a factor in human life and society. Examples of such approaches are: primordialism, essentialism, perennialism, constructivism, modernism and instrumentalism.
- "Primordialism", holds that ethnicity has existed at all times of human history and that modern ethnic groups have historical continuity into the far past. For them, the idea of ethnicity is closely linked to the idea of nations and is rooted in the pre-Weber understanding of humanity as being divided into primordially existing groups rooted by kinship and biological heritage.
- "Essentialist primordialism" further holds that ethnicity is an a priori fact of human existence, that ethnicity precedes any human social interaction and that it is basically unchanged by it. This theory sees ethnic groups as natural, not just as historical. This understanding does not explain how and why nations and ethnic groups seemingly appear, disappear and often reappear through history. It also has problems dealing with the consequences of intermarriage, migration and colonization for the composition of modern day multi-ethnic societies.
- "Kinship primordialism" holds that ethnic communities are extensions of kinship units, basically being derived by kinship or clan ties where the choices of cultural signs (language, religion, traditions) are made exactly to show this biological affinity. In this way, the myths of common biological ancestry that are a defining feature of ethnic communities are to be understood as representing actual biological history. A problem with this view on ethnicity is that it is more often than not the case that mythic origins of specific ethnic groups directly contradict the known biological history of an ethnic community.
- "Geertz's primordialism", notably espoused by anthropologist Clifford Geertz, argues that humans in general attribute an overwhelming power to primordial human "givens" such as blood ties, language, territory, and cultural differences. In Geertz' opinion, ethnicity is not in itself primordial but humans perceive it as such because it is embedded in their experience of the world.
- "Perennialism" holds that ethnicity is ever changing, and that while the concept of ethnicity has existed at all times, ethnic groups are generally short lived before the ethnic boundaries realign in new patterns. The opposing perennialist view holds that while ethnicity and ethnic groupings has existed throughout history, they are not part of the natural order.
- "Perpetual perennialism" holds that specific ethnic groups have existed continuously throughout history.
- "Situational perennialism" holds that nations and ethnic groups emerge, change and vanish through the course of history. This view holds that the concept of ethnicity is basically a tool used by political groups to manipulate resources such as wealth, power, territory or status in their particular groups' interests. Accordingly, ethnicity emerges when it is relevant as means of furthering emergent collective interests and changes according to political changes in the society. Examples of a perennialist interpretation of ethnicity are also found in Barth,and Seidner who see ethnicity as ever-changing boundaries between groups of people established through ongoing social negotiation and interaction.
- "Instrumentalist perennialism", while seeing ethnicity primarily as a versatile tool that identified different ethnics groups and limits through time, explains ethnicity as a mechanism of social stratification, meaning that ethnicity is the basis for a hierarchical arrangement of individuals. According to Donald Noel, a sociologist who developed a theory on the origin of ethnic stratification, ethnic stratification is a "system of stratification wherein some relatively fixed group membership (e.g., race, religion, or nationality) is utilized as a major criterion for assigning social positions". Ethnic stratification is one of many different types of social stratification, including stratification based on socio-economic status, race, or gender. According to Donald Noel, ethnic stratification will emerge only when specific ethnic groups are brought into contact with one another, and only when those groups are characterized by a high degree of ethnocentrism, competition, and differential power. Ethnocentrism is the tendency to look at the world primarily from the perspective of one's own culture, and to downgrade all other groups outside one’s own culture. Some sociologists, such as Lawrence Bobo and Vincent Hutchings, say the origin of ethnic stratification lies in individual dispositions of ethnic prejudice, which relates to the theory of ethnocentrism. Continuing with Noel's theory, some degree of differential power must be present for the emergence of ethnic stratification. In other words, an inequality of power among ethnic groups means "they are of such unequal power that one is able to impose its will upon another". In addition to differential power, a degree of competition structured along ethnic lines is a prerequisite to ethnic stratification as well. The different ethnic groups must be competing for some common goal, such as power or influence, or a material interest, such as wealth or territory. Lawrence Bobo and Vincent Hutchings propose that competition is driven by self-interest and hostility, and results in inevitable stratification and conflict.
- "Constructivism" sees both primordialist and perennialist views as basically flawed, and rejects the notion of ethnicity as a basic human condition. It holds that ethnic groups are only products of human social interaction, maintained only in so far as they are maintained as valid social constructs in societies.
- "Modernist constructivism" correlates the emergence of ethnicity with the movement towards nationstates beginning in the early modern period. Proponents of this theory, such as Eric Hobsbawm, argue that ethnicity and notions of ethnic pride, such as nationalism, are purely modern inventions, appearing only in the modern period of world history. They hold that prior to this, ethnic homogeneity was not considered an ideal or necessary factor in the forging of large-scale societies.
Ethnicity and race
Before Weber, race and ethnicity were often seen as two aspects of the same thing. Around 1900 and before the essentialist primordialist understanding of ethnicity was predominant, cultural differences between peoples were seen as being the result of genetically inherited traits and tendencies. This was the time when "sciences" such as phrenology claimed to be able to correlate cultural and behavioral traits of different populations with their outward physical characteristics, such as the shape of the skull.
With Weber's introduction of ethnicity as a social construct, race and ethnicity were divided from each other. A social belief in biologically well-defined races lingered on. In 1950, the UNESCO statement, "The Race Question", signed by some of the internationally renowned scholars of the time (including Ashley Montagu, Claude Lévi-Strauss, Gunnar Myrdal, Julian Huxley, etc.), suggested that: "National, religious, geographic, linguistic and cultural groups do not necessarily coincide with racial groups: and the cultural traits of such groups have no demonstrated genetic connection with racial traits. Because serious errors of this kind are habitually committed when the term 'race' is used in popular parlance, it would be better when speaking of human races to drop the term 'race' altogether and speak of 'ethnic groups'."
In 1982, American cultural anthropologists, summing up forty years of ethnographic research, argued that racial and ethnic categories are symbolic markers for different ways that people from different parts of the world have been incorporated into a global economy:
- The opposing interests that divide the working classes are further reinforced through appeals to "racial" and "ethnic" distinctions. Such appeals serve to allocate different categories of workers to rungs on the scale of labor markets, relegating stigmatized populations to the lower levels and insulating the higher echelons from competition from below. Capitalism did not create all the distinctions of ethnicity and race that function to set off categories of workers from one another. It is, nevertheless, the process of labor mobilization under capitalism that imparts to these distinctions their effective values.
According to Wolf, races were constructed and incorporated during the period of European mercantile expansion, and ethnic groups during the period of capitalist expansion.
At present the prevailing understanding of race among social scientists is that it is, like ethnicity, a social construct. Often, ethnicity also connotes shared cultural, linguistic, behavioural or religious traits. For example, to call oneself Jewish or Arab is to immediately invoke a clutch of linguistic, religious, cultural and racial features that are held to be common within each ethnic category. Such broad ethnic categories have also been termed macroethnicity. This distinguishes them from smaller, more subjective ethnic features, often termed microethnicity.
Ethnicity and nation
In some cases, especially involving transnational migration, or colonial expansion, ethnicity is linked to nationality. Anthropologists and historians, following the modernist understanding of ethnicity as proposed by Ernest Gellner and Benedict Anderson see nations and nationalism as developing with the rise of the modern state system in the seventeenth century. They culminated in the rise of "nation-states" in which the presumptive boundaries of the nation coincided (or ideally coincided) with state boundaries. Thus, in the West, the notion of ethnicity, like race and nation, developed in the context of European colonial expansion, when mercantilism and capitalism were promoting global movements of populations at the same time that state boundaries were being more clearly and rigidly defined. In the nineteenth century, modern states generally sought legitimacy through their claim to represent "nations." Nation-states, however, invariably include populations that have been excluded from national life for one reason or another. Members of excluded groups, consequently, will either demand inclusion on the basis of equality, or seek autonomy, sometimes even to the extent of complete political separation in their own nation-state. Under these conditions—when people moved from one state to another, or one state conquered or colonized peoples beyond its national boundaries—ethnic groups were formed by people who identified with one nation, but lived in another state.
Ethno-national conflict
Sometimes ethnic groups are subject to prejudicial attitudes and actions by the state or its constituents. In the twentieth century, people began to argue that conflicts among ethnic groups or between members of an ethnic group and the state can and should be resolved in one of two ways. Some, like Jürgen Habermas and Bruce Barry, have argued that the legitimacy of modern states must be based on a notion of political rights of autonomous individual subjects. According to this view, the state should not acknowledge ethnic, national or racial identity but rather instead enforce political and legal equality of all individuals. Others, like Charles Taylor and Will Kymlicka, argue that the notion of the autonomous individual is itself a cultural construct. According to this view, states must recognize ethnic identity and develop processes through which the particular needs of ethnic groups can be accommodated within the boundaries of the nation-state.
The nineteenth century saw the development of the political ideology of ethnic nationalism, when the concept of race was tied to nationalism, first by German theorists including Johann Gottfried von Herder. Instances of societies focusing on ethnic ties, arguably to the exclusion of history or historical context, have resulted in the justification of nationalist goals. Two periods frequently cited as examples of this are the nineteenth century consolidation and expansion of the German Empire and the twentieth century Third (Greater German) Reich. Each promoted the pan-ethnic idea that these governments were only acquiring lands that had always been inhabited by ethnic Germans. The history of late-comers to the nation-state model, such as those arising in the Near East and south-eastern Europe out of the dissolution of the Ottoman and Austro-Hungarian Empires, as well as those arising out of the former USSR, is marked by inter-ethnic conflicts. Such conflicts usually occur within multi-ethnic states, as opposed to between them, as in other regions of the world. Thus, the conflicts are often misleadingly labelled and characterized as civil wars when they are inter-ethnic conflicts in a multi-ethnic state.
Ethnicity in specific countries
In the United States of America, the term "ethnic" carries a much broader meaning than how it is commonly used in some other countries. Ethnicity usually refers to collectives of related groups, having more to do with morphology, specifically skin color, than with political boundaries. The word "nationality" is more commonly used for this purpose (e.g. Italian, German, French, Russian, Japanese, etc. are nationalities). Most prominently in the U.S., Latin American derived populations are grouped in a "Hispanic" or "Latino" ethnicity. The many previously designated Oriental ethnic groups are now classified as the Asian racial group for the census.
The terms "Black" and "African American", while different, are both used as ethnic categories in the US. In the late 1980s, the term "African American", was posited as the most appropriate and politically correct race designation. While it was intended as a shift away from the racial inequities of America's past often associated with the historical views of the "Black race", it largely became a simple replacement for the terms Black, Colored, Negro and the like, referring to any individual of dark skin color regardless of geographical descent. Likewise, Light-skinned Americans from Africa are not considered "African American". Many African Americans are multiracial. More than half of African Americans also have European ancestry equivalent to one great-grandparent, and 5 percent have Native American ancestry equivalent to one great-grandparent.
The term "White" generally describes people whose ancestry can be traced to Europe (including other European-settled countries such as Argentina, Mexico, Australia, Brazil, Canada and Cuba) and who now live in the United States. However, due to the caucasoid origins of Middle Easterners, who in fact have darker skin than many mongoloid East Asians, they may sometimes also be included in the "white" category. This includes people from Southwest Asia and North Africa, as well as the Arab nations, Iran, and Afghanistan. All the aforementioned are categorized as part of the "White" racial group, as per US Census categorization. This category has been split into two groups: Hispanics and non-Hispanics (e.g. White non-Hispanic and White Hispanic.) Although people from East Asia may typically have lighter skin than Middle Easterners and Arabs, they are not considered "white" due to their mongoloid origin, which reflects upon the socially-constructed nature of racial groups.
In the United Kingdom, many different ethnic classifications, both formal and informal, are used. Perhaps the most accepted is the National Statistics classification, identical to that used in the 2001 Census in England and Wales (see Ethnicity (United Kingdom)). The classification White British is used to refer to the indigenous British people. The term Oriental refers to people from China, Japan, Korea and the Pacific Rim while Asian is used to refer to people from the Indian subcontinent; India, Pakistan and Bangladesh.
China officially recognizes 56 ethnic groups, the largest of which is the Han Chinese. Many of the ethnic minorities maintain their own cultures, languages and identity, although many are also becoming more westernised. Han Chinese predominate demographically and politically in most areas of China, although less so in Tibet and Xinjiang, where the Han are still in the minority. The Han Chinese were the only ethnic group bound by the one-child policy. (For more details, see List of ethnic groups in China and Ethnic minorities in China.)
In France, the government does not collect population census data with ethnic categories. In recognition of abuses when the French cooperated in the deportation of Jews under the Nazi Occupation, the legislature passed laws preventing the government from collecting, maintaining or using ethnic population statistics. Under the administration of Nicolas Sarkozy, the French government in 2008 began a legislative process to repeal this prohibition.
In India, ethnic categories are not recognized by the government. The population is categorized in terms of the 1,652 mother tongues spoken and/or the 645 scheduled tribes to which individuals belong.
See also
- ^ Smith 1987
- ^ Marcus Banks, Ethnicity: Anthropological Constructions (1996), p. 151 "'ethnic groups' invariably stress common ancestry or endogamy".
- ^ "Anthropology. The study of ethnicity, minority groups, and identity," Encyclopaedia Britannica, 2007.
- ^ Bulmer, M. (1996). "The ethnic group question in the 1991 Census of Population". In Coleman, D and Salt, J.. Ethnicity in the 1991 Census of Population. HMSO. pp. 35.
- ^ Statistics Canada
- ^ Fredrik Barth ed. 1969 Ethnic Groups and Boundaries: The Social Organization of Cultural Difference; Eric Wolf 1982 Europe and the People Without History p. 381
- ^ Hobsbawm and Ranger (1983), The Invention of Tradition, Sider 1993 Lumbee Indian Histories.
- ^ Seidner,(1982), Ethnicity, Language, and Power from a Psycholinguistic Perspective, pp. 2-3
- ^ Geertz, Clifford, ed. (1967) Old Societies and New States: The Quest for Modernity in Africa and Asia. New York: The Free Press.
- ^ Cohen, Abner (1969) Custom and Politics in Urban Africa: A Study of Hausa Migrants in a Yoruba Town. London: Routledge & Kegan Paul.
- ^ Abner Cohen (1974) Two-Dimensional Man: An essay on power and symbolism in complex society. London: Routledge & Kegan Paul.
- ^ J. Hutchinson & A.D. Smith (eds.), Oxford readers: Ethnicity (Oxford 1996), "Introduction", 8-9
- ^ Gellner, Ernest (1983) Nations and Nationalism. Oxford: Blackwell.
- ^ Ernest Gellner (1997) Nationalism. London: Weidenfeld & Nicolson.
- ^ Smith, Anthony D. (1986) The Ethnic Origins of Nations. Oxford: Blackwell.
- ^ Anthony Smith (1991) National Identity. Harmondsworth: Penguin.
- ^ T.H. Eriksen "Ethnic identity, national identity and intergroup conflict: The significance of personal experiences" in Ashmore, Jussim, Wilder (eds.): Social identity, intergroup conflict, and conflict reduction, pp. 42–70. Oxford: Oxford University Press'. 2001
- ^ Oxford English Dictionary Second edition, online version as of 2008-01-12, "ethnic, a. and n.". Cites Sir Daniel Wilson, The archæology and prehistoric annals of Scotland 1851' (1863)
- ^ Oxford English Dictionary Second edition, online version as of 2008-01-12, "ethnic, a. and n.". Citing Huxley & Hadden (1935), We Europeans, pp. 136,181
- ^ Cohen, Ronald. (1978) "Ethnicity: Problem and Focus in Anthropology", Ann. Rev. Anthropol. 1978. 7:379-403
- ^ Glazer, Nathan and Daniel P. Moynihan (1975) Ethnicity - Theory and Experience, Cambridge, Mass. Harvard University Press
- ^ Oxford English Dictionary Second edition, online version as of 2008-01-12, "ethnic, a. and n."
- ^ Wallman, S. "Ethnicity research in Britain", Current Anthropology, v. 18, n. 3, 1977, pp. 531–532.
- ^ Eriksen 1993 p. 2
- ^ Max Weber 1978 Economy and Society eds. Guenther Roth and Claus Wittich, trans. Ephraim Fischof, vol. 2 Berkeley: University of California Press, 389
- ^ Banton, Michael. (2007) "Weber on Ethnic Communities: A critique", Nations and Nationalism 13 (1), 2007, 19–35.
- ^ a b c d e Ronald Cohen 1978 "Ethnicity: Problem and Focus in Anthropology", Annual Review of Anthropology 7: 383 Palo Alto: Stanford University Press
- ^ Joan Vincent 1974, "The Structure of Ethnicity" in Human Organization 33(4): 375-379
- ^ (Smith 1999, p. 12)
- ^ a b Delanty, Gerard & Krishan Kumar (2006) The SAGE Handbook of Nations and Nationalism. SAGE. ISBN 1-41290101-4 p. 171
- ^ a b c d (Smith 1999, p. 13)
- ^ Introduction to Sociology, 7th edition. Anthony Giddens, Mitchell Duneier, Richard Appelbaum, Deborah Carr
- ^ a b Noel, Donald L. (1968). "A Theory of the Origin of Ethnic Stratification". Social Problems 16 (2): 157–172. DOI:10.1525/sp.1968.16.2.03a00030.
- ^ a b c Bobo, Lawrence; Hutchings, Vincent L. (1996). "Perceptions of Racial Group Competition: Extending Blumer's Theory of Group Position to a Multiracial Social Context". American Sociological Review 61 (6): 951–972. DOI:10.2307/2096302.
- ^ (Smith 1999, pp. 4–7)
- ^ Banton, Michael. (2007) "Weber on Ethnic Communities: A critique", Nations and Nationalism 13 (1), 2007, 19–35.
- ^ A. Metraux (1950) "United Nations Economic and Security Council Statement by Experts on Problems of Race", American Anthropologist 53(1): 142-145
- In this regard, distinctions of race have implications rather different from ethnic variations. Racial distinctions, such as "Indian" or "African American", are the outcome of the subjugation of populations in the course of European mercantile expansion. The term "Indian", and later "Native American", stand for the conquered populations of the New World, in disregard of any cultural or physical differences among them. "Negro", and later "African American", similarly serve as a cover term for the culturally and physically variable African, populations that furnished slaves, as well as for the slaves themselves. Indians are conquered people who could be forced to labor or pay tribute; Negroes are "hewers of wood and drawers of water", obtained in violence and put to work under coercion. These two terms thus singled out for primary attention, the historic fact that these populations were made to labor in servitude to support a new class of overlords. Simultaneously, the terms, like "white people", disregard cultural and physical differences within each large category, denying any constituent group political, economic, or ideological identity of its own.
- Racial terms mirror the political process by which populations of whole continents were turned into providers of coerced surplus labor. Under capitalism, these terms did not lose their association with civil disability. They continue to invoke supposed descent from such subjugated populations so as to deny their putative descendants access to upper segments of the labor market. "Indians" and "Negroes" are thus confined to the lower ranks of the industrial army or depressed into the industrial reserve. The function of racial categories within capitalism is exclusionary. They stigmatize groups in order to exclude them from more highly paid jobs and from access to the information needed for their execution. They insulate the more advantaged workers against competition from below, making it difficult for employers to use stigmatized populations as cheaper substitutes or as strikebreakers. Finally, they weaken the ability of such groups to mobilize politically on their own behalf by forcing them back into casual employment and thereby intensifying competition among them for scarce and shifting resources.
- While the categories of race serve primarily to exclude people from all but the lower echelons of the industrial army, ethnic categories express the ways that particular populations came to relate themselves to given segments of the labor market. Such categories emerge from two sources, one external to the group in question, the other internal. As each cohort entered the industrial process, outsiders were able to categorize it in terms of putative provenance and supposed affinity to particular segments of the labor market. At the same time, members of the cohort itself came to value membership in the group thus defined, as a qualification for establishing economic and political claims. Such ethnicities rarely coincided with the initial self-identification of the industrial recruits, who thought of themselves as Hanoverians or Bavarians rather than as Germans, as members of their village or their parish (okiloca) rather than as Poles, as Tonga or Yao rather than "Nyasalanders." The more comprehensive categories emerged only as particular cohorts of workers gained access to different segments of the labor market and began to treat their access as a resource to be defended both socially and politically. Such ethnicities are therefore not "primordial" social relationships. They are historical products of labor market segmentation under the capitalist mode.
- ^ Abizadeh 2001
- ^ Seidner,(1982), Ethnicity, Language, and Power from a Psycholinguistic Perspective, p. 11.
- ^ Maria Rostworowski, "The Incas", Peru Culture
- ^ http://126.96.36.199/search?q=cache:_V2dvVcvWIAJ:whp.uoregon.edu/Lockhart/Intro.pdf+microethnicity&hl=en&ct=clnk&cd=20&gl=uk James Lockhart, Microethnicity in Philological ethnohistory
- ^ Christopher Larkosh, "Je me souviens…aussi: Microethnicity and the Fragility of Memory in French-Canadian New England", TOPIA: Journal for Canadian Cultural Studies, Issue 16 (Toronto, 2006), pp. 91-108
- ^ Gellner 2006 Nations and Nationalism Blackwell Publishing
- ^ Anderson 2006 Imagined Communities Verso
- ^ Walter Pohl, "Conceptions of Ethnicity in Early Medieval Studies", Debating the Middle Ages: Issues and Readings, ed. Lester K. Little and Barbara H. Rosenwein, (Blackwell), 1998, pp 13-24, notes that historians have projected the nineteenth-century conceptions of the nation-state backwards in time, employing biological metaphors of birth and growth: "that the peoples in the Migration Period had little to do with those heroic (or sometimes brutish) clichés is now generally accepted among historians," he remarked. Early medieval peoples were far less homogeneous than often thought, and Pohl follows Reinhard Wenskus, Stammesbildung und Verfassung. (Cologne and Graz) 1961, whose researches into the "ethnogenesis" of the German peoples convinced him that the idea of common origin, as expressed by Isidore of Seville Gens est multitudo ab uno principio orta ("a people is a multitude stemming from one origin") which continues in the original Etymologiae IX.2.i) "sive ab alia natione secundum propriam collectionem distincta ("or distinguished from another people by its proper ties") was a myth.
- ^ Aihway Ong 1996 "Cultural Citizenship in the Making" in Current Anthropology 37(5)
- ^ Encyclopedia of Public Health, by The Gale Group, Inc
- ^ Henry Louis Gates, Jr., In Search of Our Roots: How 19 Extraordinary African Americans Reclaimed Their Past, New York: Crown Publishers, 2009, pp. 20-21
- ^ Article 8 of the French loi Informatique et libertés, 1978: "It is forbidden to collect or process personal data that reveal, directly or indirectly, the racial or ethnic origins, the political, philosophical or religious opinions or the trade union membership of persons, or that relate to their health or sexual life."
- Abizadeh, Arash, "Ethnicity, Race, and a Possible Humanity" World Order, 33.1 (2001): 23-34. (Article that explores the social construction of ethnicity and race.)
- Barth, Fredrik. Ethnic groups and boundaries. The social organization of culture difference, Oslo: Universitetsforlaget, 1969
- Billinger, Michael S. (2007), "Another Look at Ethnicity as a Biological Concept: Moving Anthropology Beyond the Race Concept", Critique of Anthropology 27,1:5–35.
- Cole, C.L. "Nike’s America/ America’s Michael Jordan", Michael Jordan, Inc.: Corporate Sport, Media Culture, and Late Modern America. (New York: Suny Press, 2001).
- Dünnhaupt, Gerhard, "The Bewildering German Boundaries", in: Festschrift for P. M. Mitchell (Heidelberg: Winter 1989).
- Eysenck, H.J., Race, Education and Intelligence (London: Temple Smith, 1971) (ISBN 0-8511-7009-9)
- Hartmann, Douglas. "Notes on Midnight Basketball and the Cultural Politics of Recreation, Race and At-Risk Urban Youth", Journal of Sport and Social Issues. 25 (2001): 339-366.
- Hobsbawm, Eric, and Terence Ranger, editors, The Invention of Tradition. (Cambridge: Cambridge University Press, 1983).
- Thomas Hylland Eriksen (1993) Ethnicity and Nationalism: Anthropological Perspectives, London: Pluto Press
- Levinson, David, Ethnic Groups Worldwide: A Ready Reference Handbook, Greenwood Publishing Group (1998), ISBN 9781573560191.
- Morales-Díaz, Enrique; Gabriel Aquino; & Michael Sletcher, "Ethnicity", in Michael Sletcher, ed., New England, (Westport, CT, 2004).
- Omni, Michael and Howard Winant. Racial Formation in the United States from the 1960s to the 1980s. (New York: Routledge and Kegan Paul, Inc., 1986).
- Seidner, Stanley S. Ethnicity, Language, and Power from a Psycholinguistic Perspective. (Bruxelles: Centre de recherche sur le plurilinguisme, 1982).
- Sider, Gerald, Lumbee Indian Histories (Cambridge: Cambridge University Press, 1993).
- Smith, Anthony D. (1987), The Ethnic Origins of Nations, Blackwell
- Smith, Anthony D. (1999), Myths and memories of the Nation, Oxford University Press
- ^ U.S. Census Bureau State & County QuickFacts: Race.
- Ethnicity at the Open Directory Project
- Downloadable article: "Evidence that a West-East admixed population lived in the Tarim Basin as early as the early Bronze Age" Li et al. BMC Biology 2010, 8:15.
This page uses content from the English language Wikipedia. The original content was at Ethnic group. The list of authors can be seen in the page history. As with this Familypedia wiki, the content of Wikipedia is available under the Creative Commons License.
Economics, the study of how societies allocate resources, make choices, and create wealth, is a subject that holds great significance in our daily lives. Teaching economics to kids through online coaching classes provides them with essential knowledge and skills to navigate the complexities of the modern world. This article explores the reasons why learning economics is important for children, and how online coaching classes can foster their understanding of this vital subject.
Learning economics develops financial literacy, equipping children with the knowledge and skills needed to manage money effectively. Online coaching classes provide a structured curriculum that covers topics such as budgeting, saving, investing, and the basics of personal finance. By gaining a solid foundation in economics, children learn how to make informed financial decisions and develop responsible money management habits from an early age.
Economics cultivates critical thinking and decision-making skills in children. Through online coaching classes, children are exposed to concepts such as opportunity cost, supply and demand, and scarcity, which help them analyze situations and make rational choices. Economics teaches children to evaluate the benefits and drawbacks of different options, enabling them to make sound decisions in their personal and professional lives.
Understanding the Basics of Trade and Commerce:
Economics introduces children to the fundamental principles of trade and commerce. Online coaching classes provide interactive lessons on topics like supply and demand, markets, and international trade. By understanding how businesses operate, the role of consumers, and the importance of competition, children gain a better understanding of the economic systems that drive our global economy.
Understanding the Global Economy:
The study of economics provides children with an understanding of the global economy and its interconnectedness. Online coaching classes introduce children to concepts such as trade, globalization, and economic systems. By comprehending how countries interact economically, children develop a global perspective and become aware of the opportunities and challenges presented by a globalized world.
Entrepreneurship and Innovation:
Economics nurtures an entrepreneurial spirit and encourages innovative thinking in children. Online coaching classes can incorporate lessons on entrepreneurship, teaching children about starting businesses, identifying market opportunities, and managing risks. By exploring the principles of supply and demand, competition, and innovation, children are inspired to think creatively and develop problem-solving skills that are valuable in both business and everyday life.
Economics provides insights into how societies function and the impact of economic decisions on individuals and communities. Online coaching classes can explore topics such as income distribution, poverty, inequality, and public policy. By understanding these concepts, children develop empathy and a sense of social responsibility, empowering them to contribute to the creation of a more equitable and sustainable society.
Learning economics prepares children for future careers by introducing them to various industries and occupations. Online coaching classes can highlight the role of economics in fields such as finance, business, government, and international relations. By gaining a solid understanding of economic principles and how they apply in different contexts, children can make informed career choices and develop the skills and knowledge required for success in their chosen professions.
Understanding the Role of Government:
Economics introduces children to the role of government in the economy. Online coaching classes can cover topics like taxation, public goods, and economic policies. By understanding how government decisions impact the economy, children gain insights into public finance, fiscal responsibility, and the role of government in promoting economic growth and social welfare.
Preparing for Future Careers:
Learning economics prepares children for future careers in various fields. Online coaching classes can highlight the diverse career opportunities in economics, such as finance, business management, public policy, and research. By acquiring a solid foundation in economics, children develop transferable skills, such as critical thinking, data analysis, and problem-solving, that are highly valued in the job market.
Economics education encourages civic engagement and active participation in public affairs. Online coaching classes can discuss the role of government in the economy, public policy issues, and the importance of informed citizenship. By understanding economic concepts, children are better equipped to engage in debates, critically analyze economic proposals, and contribute to the democratic decision-making process.
Learning economics through online classes is crucial for children as it develops financial literacy, decision-making skills, and a global perspective. It fosters an entrepreneurial mindset and societal awareness, and prepares children for future careers. By promoting civic engagement and helping children understand and navigate the complexities of the economy, economics education equips them with the knowledge and skills needed to become responsible and active participants in society. Online coaching classes offer a convenient and interactive platform for children to engage with economics, enabling them to develop a strong foundation in this vital subject.
Hundreds of kilometres above the Earth’s surface, a vast network of satellites is collecting data to help us better understand the climate system.
Since the launch of the first weather satellite in 1959, Earth observation satellites have proven to be vital tools for climate research. In 1984, the Earth Radiation Budget Satellite provided an early insight into how human activities, such as burning fossil fuels, affect the planet’s radiation balance, and helped discover the hole in the ozone layer. Two decades later, the Orbiting Carbon Observatory mission gave us the first global maps of carbon dioxide concentration around the world (see gallery below).
The latest addition to this network of global satellites is the European Space Agency’s (ESA) Sentinel-3A, launched on Tuesday evening from Russia’s Plesetsk Cosmodrome. The new satellite is designed to help fill in gaps in the data on sea-surface temperature and map the extent and topography of ice, among other purposes.
So, what’s up there in the Earth’s atmosphere? Carbon Brief has catalogued all the satellites currently in operation that are adding to scientists’ understanding of climate change.
A global view
Pulling together data from the World Meteorological Organisation, NASA, NOAA and elsewhere, there are 162 “climate satellites” that are either active or semi-operational. We’ve compiled a list of all these satellites into a spreadsheet below.
Our analysis takes a fairly broad definition of what might be described as a climate satellite. For example, we include both publicly and privately funded satellites that have additional purposes unrelated to climate science, such as communications or surveillance. It is worth noting that data from some of these satellites will be harder to access than others: not all agencies or organisations make their data readily available to the scientific community.
Looking at the interactive visualisation above, the satellites fall into two categories. Those in a geostationary orbit cruise at the very precise altitude of 35,786km, remaining above the same spot on earth. The second group are those with low-earth orbits, circling at a height of around 400-1400km above Earth’s surface.
The different orbits serve different purposes. A geostationary orbit allows satellites to continually monitor a particular region uninterrupted. Satellites in a low earth orbit can directly monitor the climate from their position within or just above the atmosphere, and can provide near-global coverage as they scan over different swathes of ground with each orbit.
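The 35,786 km altitude quoted for geostationary orbit follows directly from Kepler's third law: a satellite whose period matches one sidereal day must sit at one particular radius. Below is a minimal sketch in Python; the gravitational parameter, sidereal-day length and Earth radius are standard values assumed here, not figures from the article.

```python
import math

# Standard values (assumed here, not taken from the article):
GM_EARTH = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
SIDEREAL_DAY = 86164.1      # length of one sidereal day, seconds
EARTH_RADIUS_KM = 6378.1    # equatorial radius, km

# Kepler's third law: r^3 = GM * T^2 / (4 * pi^2)
r_m = (GM_EARTH * SIDEREAL_DAY**2 / (4 * math.pi**2)) ** (1 / 3)
altitude_km = r_m / 1000 - EARTH_RADIUS_KM

print(f"Geostationary altitude: {altitude_km:,.0f} km")   # ~35,786 km
```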
The graph below shows a zoomed-in view of the satellites in a lower earth orbit that have been launched since 2000:
The satellites in the graph are categorised according to their primary area of observation. Land observation satellites tend to be in a lower orbit. Travelling closer to the earth means they can get a more detailed view of the land.
Climate satellites are operated by agencies around the world, from Kazakhstan to Chile. But not every country has the resources to support their own satellite. The graph below shows that large nations such as China, India and the US top the list of countries with the most climate satellites in operation, along with multinational collaborations.
Individually, European countries might appear underrepresented. But the many missions of ESA and the European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT) show that Europe is a key player when it comes to climate satellite research.
Why are satellites so important?
As a vehicle for research, satellites aren’t cheap. The contract to build Sentinel-3A was worth €305m back in 2008, and last week ESA signed a €450m contract for two more satellites in the Sentinel-3 mission.
Carbon Brief asked Dr Simon Keogh, head of the satellite data products and systems group at the Met Office, why satellites are worth the money:
Sentinel-3A, which launched earlier this week, will add to this global coverage, filling in for the Envisat mission which failed in 2012. Dr Keogh explains:
Speed of Light
Somewhere in outer space, billions of light years from Earth, the original light associated with the Big Bang of the universe is blazing new ground as it continues moving outward. In stark contrast, another form of electromagnetic radiation originating on the Earth, radio waves from the inaugural live episode of The Lucy Show, is broadcasting a premiere somewhere in deep space, although greatly reduced in amplitude.
The basic concept behind both events involves the speed of light (and all other forms of electromagnetic radiation), which scientists have thoroughly examined, and is now expressed as a constant value denoted in equations by the symbol c. Not truly a constant, but rather the maximum speed in a vacuum, the speed of light, which is almost 300,000 kilometers per second, can be manipulated by changing media or with quantum interference.
Light traveling in a uniform substance, or medium, propagates in a straight line at a relatively constant speed, unless it is refracted, reflected, diffracted, or perturbed in some other manner. This well-established scientific fact is not a product of the Atomic Age or even the Renaissance, but was originally promoted by the ancient Greek scholar, Euclid, somewhere around 350 BC in his landmark treatise Optica. However, the intensity of light (and other electromagnetic radiation) is inversely proportional to the square of the distance traveled. Thus, after light has traveled twice a given distance, the intensity drops by a factor of four.
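The inverse-square relationship described above is easy to check numerically. A small illustrative sketch:

```python
def relative_intensity(distance, reference_distance=1.0):
    """Intensity relative to its value at reference_distance (inverse-square law)."""
    return (reference_distance / distance) ** 2

print(relative_intensity(2))   # 0.25  -> doubling the distance cuts intensity to a quarter
print(relative_intensity(3))   # ~0.11 -> tripling it cuts intensity to a ninth
```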
When light traveling through the air enters a different medium, such as glass or water, the speed and wavelength of light are reduced (see Figure 2), although the frequency remains unaltered. Light travels at approximately 300,000 kilometers per second in a vacuum, which has a refractive index of 1.0, but it slows down to 225,000 kilometers per second in water (refractive index of 1.3; see Figure 2) and 200,000 kilometers per second in glass (refractive index of 1.5). In diamond, with a rather high refractive index of 2.4, the speed of light is reduced to a relative crawl (125,000 kilometers per second), being about 60 percent less than its maximum speed in a vacuum.
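The slow-down in a medium follows the simple relation v = c / n, where n is the refractive index. A short sketch using the indices quoted above; the article's rounded figures (for example 225,000 km/s for water) correspond to slightly different index values, such as 1.33 for water.

```python
C_VACUUM_KM_S = 299_792.458   # commonly rounded to 300,000 km/s, as in the text

def speed_in_medium(n):
    """Approximate speed of light in a medium of refractive index n: v = c / n."""
    return C_VACUUM_KM_S / n

for medium, n in [("vacuum", 1.0), ("water", 1.3), ("glass", 1.5), ("diamond", 2.4)]:
    print(f"{medium:8s} n = {n:<4} v = {speed_in_medium(n):,.0f} km/s")
```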
Because of the enormous journeys that light travels in outer space between galaxies (see Figure 1) and within the Milky Way, the expanse between stars is measured not in kilometers, but rather light-years, the distance light would travel in a year. A light-year equals 9.5 trillion kilometers or about 5.9 trillion miles. The distance from Earth to the next nearest star beyond our sun, Proxima Centauri, is approximately 4.24 light-years. By comparison, the Milky Way galaxy is estimated to be about 150,000 light-years in diameter, and the distance to the Andromeda galaxy is approximately 2.21 million light-years. This means that light leaving the Andromeda galaxy 2.21 million years ago is just arriving at Earth, unless it was waylaid by reflecting celestial bodies or refracting debris.
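The light-year figures quoted here can be reproduced with a couple of lines of arithmetic:

```python
C_KM_S = 299_792.458
SECONDS_PER_YEAR = 365.25 * 24 * 3600          # Julian year

light_year_km = C_KM_S * SECONDS_PER_YEAR
print(f"One light-year = {light_year_km:.3e} km")   # ~9.46e12 km, the "9.5 trillion" in the text

# Distances quoted in the paragraph above, converted to kilometers
for body, ly in [("Proxima Centauri", 4.24), ("Andromeda galaxy", 2.21e6)]:
    print(f"{body}: {ly * light_year_km:.2e} km away")
```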
When astronomers gaze into the night skies, they are observing a mixture of real time, the recent past, and ancient history. For example, during the period that pioneering Babylonians, Arab astrologers, and Greek astronomers described the stellar constellations, Scorpius (Scorpio to astrologers) still had the whiptail of a scorpion. The tail star and others in this constellation had appeared as novae in the skies between 500 and 1000 BC, but are no longer visible to today's stargazers. Although some of the stars that are observed in the night skies of Earth have long since perished, the light waves that carry their images are still reaching human eyes and telescopes. In effect, the light from their destruction (and the darkness of their absence) has not yet crossed the enormous distances of deep space because of insufficient time.
Empedocles of Acragas, who lived around 450 BC, was one of the first recorded philosophers to speculate that light traveled with a finite velocity. Almost a millennium later, around 525 AD, Roman scholar and mathematician Anicius Boethius attempted to document the speed of light, but after being accused of treason and sorcery, was decapitated for his scientific endeavors. Since the earliest application of black powder for fireworks and signals by the Chinese, man has wondered about the speed of light. With the flash of light and color preceding the explosive sound by several seconds, it did not require a serious calculation to realize that the speed of light obviously exceeded the speed of sound.
The Chinese secrets behind explosives made their way to the West during the middle of the thirteenth century, and with them, came questions about the speed of light. Prior to this period, other investigators must have considered the flash of lightning followed later by the clap of thunder, typical of a thunderstorm, but offered no plausible scientific explanations about the nature of the delay. The Arabic scholar Alhazen was the first serious optical scientist to suggest (around 1000 AD) that light had a finite speed, and by 1250 AD, British optics pioneer Roger Bacon wrote that the speed of light was finite, although very rapid. Still, the widely held opinion by a majority of scientists during this period was that the speed of light is infinite and could not be measured.
In 1572, the famous Danish astronomer Tycho Brahe was the first to describe a supernova, which occurred in the constellation Cassiopeia. After watching a "new star" suddenly appear in the sky, slowly intensify in brightness, and then fade from view over an 18-month period, the astronomer was mystified, but intrigued. These novel celestial visions drove Brahe and his contemporaries to question the widely held notion of a perfect and unchanging universe having an infinite speed of light. The belief that light has infinite speed was hard to displace, although a few scientists were beginning to question the speed of light in the sixteenth century. As late as 1604, the German physicist Johannes Kepler speculated that the speed of light was instantaneous. He added in his published notes that the vacuum of space did not slow the speed of light down, hampering, to a limited degree, the quest by his contemporaries for the ether that supposedly filled space and carried the light.
Shortly after the invention and some relatively crude refinements to the telescope, Danish astronomer Ole Roemer (in 1676) was the first scientist to make a rigorous attempt to estimate the speed of light. By studying Jupiter's moon Io and its frequent eclipses, Roemer was able to predict the periodicity of an eclipse period for the moon (Figure 3). However, after several months, he noticed that his predictions were slowly becoming less accurate by progressively longer time intervals, reaching a maximum error of about 22 minutes (a rather large discrepancy, considering how far light travels in that time span). Then, just as oddly, his predictions again became more accurate over several months, with the cycle repeating itself. Working at the Paris Observatory, Roemer soon realized that the observed differences were caused by variations in the distance between the Earth and Jupiter, due to orbital pathways of the planets. As Jupiter moved away from the Earth, light had a longer distance to travel, taking additional time to reach the Earth. Applying the relatively inaccurate calculations for the distances between Earth and Jupiter available during the period, Roemer was able to estimate the speed of light at about 137,000 miles (or 220,000 kilometers) per second. Figure 3 illustrates a reproduction of the original drawings by Roemer delineating his methodology utilized to determine the speed of light.
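The logic of Roemer's estimate can be reproduced in a few lines: the extra delay he observed corresponds roughly to the time light needs to cross the diameter of Earth's orbit. The sketch below uses his 22-minute figure together with the modern value of the astronomical unit, which is an assumption; seventeenth-century distance estimates were cruder, which is why Roemer himself arrived at about 220,000 kilometers per second.

```python
AU_KM = 1.496e8           # modern astronomical unit in km (assumed; not available to Roemer)
extra_delay_s = 22 * 60   # Roemer's maximum discrepancy, from the text

# The 22-minute delay corresponds to light crossing the diameter of Earth's orbit (2 AU)
speed_estimate = 2 * AU_KM / extra_delay_s
print(f"Roemer-style estimate: {speed_estimate:,.0f} km/s")   # ~227,000 km/s
```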
Roemer's work stirred the scientific community, and many investigators began to reconsider their speculations about the infinite speed of light. Sir Isaac Newton, for example, wrote in his landmark 1687 treatise Philosophiae Naturalis Principia Mathematica (Mathematical Principles of Natural Philosophy), "For it is now certain from the phenomena of Jupiter's satellites, confirmed by the observations of different astronomers, that light is propagated in succession and requires about seven or eight minutes to travel from the sun to the earth", which is actually a remarkably close estimate for the correct speed of light. Newton's respected opinion and widespread reputation were instrumental in jump-starting the Scientific Revolution, and helped launch new research by scientists who now endorsed light's speed as finite.
The next in line to provide a useful estimate of the speed of light was the British physicist James Bradley. In 1728, a year after Newton's death, Bradley estimated the speed of light in a vacuum to be approximately 301,000 kilometers per second, using stellar aberrations. These phenomena are manifested by an apparent variation in the position of stars due to the motion of the Earth around the sun. The degree of stellar aberration can be determined from the ratio of the Earth's orbital speed to the speed of light. By measuring the stellar aberration angle and applying that data to the orbital speed of the Earth, Bradley was able to arrive at a remarkably accurate estimate.
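Bradley's method can be sketched with the standard aberration relation tan(α) ≈ v_Earth / c. Neither Earth's orbital speed nor the aberration angle appears in the text, so the numbers below are assumed modern values used purely for illustration.

```python
import math

EARTH_ORBITAL_SPEED_KM_S = 29.78   # mean orbital speed of the Earth (assumed modern value)
ABERRATION_ARCSEC = 20.5           # stellar aberration constant (assumed modern value)

alpha = math.radians(ABERRATION_ARCSEC / 3600)
c_estimate = EARTH_ORBITAL_SPEED_KM_S / math.tan(alpha)
print(f"Bradley-style estimate: {c_estimate:,.0f} km/s")   # ~300,000 km/s
```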
In 1834, Sir Charles Wheatstone, inventor of the stereoscope and a pioneer in the science of sound, attempted to measure the speed of electricity. Wheatstone invented a device that utilized rotating mirrors and capacitive discharge through a Leyden jar to generate and clock the movement of sparks through almost eight miles of wire. Unfortunately, his calculations (and perhaps his instrumentation) were in error to such a degree that Wheatstone estimated the velocity of electricity at 288,000 miles per second, a mistake that led him to believe that electricity traveled faster than light. Wheatstone's research was later expanded upon by French scientist Dominique François Jean Arago. Although he failed to complete his work before his eyesight failed in 1850, Arago correctly postulated that light traveled slower in water than air.
Meanwhile in France, rival scientists Armand Fizeau and Jean-Bernard-Leon Foucault independently attempted to measure the speed of light, without relying on celestial events, by taking advantage of Arago's discoveries and expanding on Wheatstone's rotating mirror instrument design. In 1849, Fizeau engineered a device that flashed a light beam through a toothed wheel (instead of a rotating mirror), and then onto a fixed mirror positioned at a distance of 5.5 miles away. By rotating the wheel at a rapid rate, he was able to steer the beam through a gap between two of the teeth on the outward journey and catch reflected rays in the neighboring gap on the way back. Armed with the wheel speed and distance traveled by the pulsed light, Fizeau was able to calculate the speed of light. He also discovered that light travels faster in air than in water (confirming Arago's hypothesis), a fact that fellow countryman Foucault later confirmed through experimentation.
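Fizeau's geometry reduces to a simple formula: with N teeth spinning at f revolutions per second over a one-way distance d, the returning beam is first blocked when the round-trip time 2d/c equals the time for the wheel to advance half a tooth period, 1/(2Nf), so c = 4dNf. In the sketch below, the tooth count and rotation rate are commonly quoted historical figures rather than values from the text, and the baseline is the commonly quoted 8.6 km (roughly the 5.5 miles mentioned above).

```python
d_m = 8_633       # one-way distance in metres (commonly quoted figure for Fizeau's baseline)
N_teeth = 720     # teeth on the wheel (assumed historical value)
f_rev_s = 12.6    # rotation rate at the first eclipse of the return beam (assumed historical value)

c_estimate = 4 * d_m * N_teeth * f_rev_s          # c = 4 * d * N * f
print(f"Fizeau-style estimate: {c_estimate / 1000:,.0f} km/s")   # ~313,000 km/s
```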
Foucault employed a rapidly rotating mirror driven by a compressed air turbine to measure the speed of light. In his apparatus (see Figure 4), a narrow beam of light is passed through an aperture and then through a glass window (acting also as a beamsplitter) with a finely graduated scale before impacting on the rapidly spinning mirror. Light reflected from the spinning mirror is directed through a battery of stationary mirrors in a zigzag pattern designed to increase the path length of the instrument to about 20 meters without a corresponding increase in size. In the amount of time it took the light to reflect through the series of mirrors and return to the rotating mirror, a slight shift in the mirror position had occurred. Subsequently, light reflected from the shifted position of the spinning mirror follows a new pathway back to the source and into a microscope mounted on the instrument. The tiny shift in light could be seen through the microscope and recorded. By analysis of the data collected from his experiment, Foucault was able to calculate the speed of light as 298,000 kilometers per second (approximately 185,000 miles per second).
The light path in Foucault's device was short enough to be utilized in the measurement of light speeds through media other than air. He discovered that the speed of light in water or glass was only about two-thirds of the value in air, and he also concluded that the speed of light through a given medium is inversely proportional to the refractive index. This remarkable result is consistent with the predictions about light behavior developed hundreds of years earlier from the wave theory of light propagation.
Following Foucault's lead, a Polish-born American physicist named Albert A. Michelson attempted to increase the accuracy of the method, and successfully measured the speed of light in 1878 with a more sophisticated version of the apparatus along a 2,000-foot wall lining the banks of Maryland's Severn River. Investing in high quality lenses and mirrors to focus and reflect a beam of light over a much longer pathway than the one utilized by Foucault, Michelson calculated a final result of 186,355 miles per second (299,909 kilometers per second), allowing for a possible error of about 30 miles per second. Due to the increased sophistication of his experimental design, the accuracy of Michelson's measurement was over 20 times greater than Foucault's.
During the late 1800s it was still believed by most scientists that light propagates through space utilizing a carrier medium termed the ether. Michelson teamed with scientist Edward Morley in 1887 to devise an experimental method for detecting the ether by observing relative changes in the speed of light as the Earth completed its orbit around the sun. In order to accomplish this goal, they designed an interferometer that splits a beam of light and re-directs the individual beams through two different pathways, each over 10 meters in length, using a complex array of mirrors. Michelson and Morley reasoned that if the Earth is traveling through an ether medium, the beam reflecting back and forth perpendicular to the flow of ether would have to travel farther than the beam reflecting parallel to the ether. The result would be a delay in one of the light beams that could be detected when the beams were recombined through interference.
The experimental apparatus built by Michelson and Morley was massive (see Figure 5). Mounted on a slowly rotating stone slab that was over five feet square and 14 inches thick, the instrument was further protected by an underlying pool of mercury that acted as a frictionless shock absorber to remove vibrations from the Earth. Once the slab was set into motion, achieving a top speed of 10 revolutions per hour, it took hours to reach a halt again. Light passing through a beamsplitter, and reflected by the mirror system, was examined with a microscope for interference fringes, but none were ever observed. However, Michelson utilized his interferometer to accurately determine the speed of light at 186,320 miles per second (299,853 kilometers per second), a value that stood as the standard for the next 25 years. The failure to detect a change in the speed of light by the Michelson-Morley experiment set in motion the beginnings of an end to the ether controversy, which was finally laid to rest by the theories of Albert Einstein in the early twentieth century.
In 1905, Einstein published his Special Theory of Relativity followed by the General Theory of Relativity in 1915. The first theory related to the movement of objects at constant velocity relative to one another, while the second focused on acceleration and its links with gravity. Because they challenged many long-standing hypotheses, such as Isaac Newton's law of motion, Einstein's theories were a revolutionary force in physics. The idea of relativity embodies the concept that the velocity of an object can be determined only relative to the position of the observer. As an example, a man walking inside an airliner appears to be traveling at about one mile per hour in the reference frame of the aircraft (which itself is moving at 600 miles per hour). However, to an observer on the ground, the man seems to be moving at 601 miles per hour.
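The airliner example uses the everyday Galilean rule 600 + 1 = 601 miles per hour. Special relativity replaces simple addition with (u + v) / (1 + uv/c²), which is indistinguishable from ordinary addition at walking and flying speeds but never yields a result above c. A minimal sketch:

```python
C_MPH = 670_616_629   # speed of light in miles per hour (approximate)

def add_velocities(u, v, c=C_MPH):
    """Relativistic velocity addition: (u + v) / (1 + u*v / c**2)."""
    return (u + v) / (1 + u * v / c ** 2)

print(add_velocities(600, 1))                              # ~601 mph, as in the airliner example
print(add_velocities(0.9 * C_MPH, 0.9 * C_MPH) / C_MPH)    # ~0.9945 c: the sum never exceeds c
```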
Einstein assumed in his calculations that the speed of light traveling between two frames of reference remains the same for observers in both locations. Because an observer in one frame uses light to determine the position and velocity of objects in another frame, this changes the manner in which the observer can relate the position and velocity of the objects. Einstein employed this concept to derive several important formulas describing how objects in one frame of reference appear when viewed from another that is in uniform motion relative to the first. His results led to some unusual conclusions, although the effects only become noticeable when the relative velocity of an object approaches the speed of light. In summary, the major implications of Einstein's fundamental theories, and of his often-referenced relativity equation E = mc², are discussed below.
Although Einstein's theory affected the entire world of physics, it had particularly important implications for those scientists who were studying light. The theory explained why the Michelson-Morley experiment failed to produce the expected results, discouraging further serious scientific investigations into the nature of ether as a carrier medium. It also demonstrated that nothing can move faster than the speed of light in a vacuum, and that this speed is a constant and unchanging value. Meanwhile, experimental scientists continued to apply increasingly sophisticated instruments to zero in on a correct value for the speed of light and reduce the error in its measurement.
Measurements of the Speed of Light
During the late nineteenth century, advances in radio and microwave technology provided novel approaches for measuring the speed of light. In 1888, more than 200 years after Roemer's pioneering celestial observations, German physicist Heinrich Rudolf Hertz measured the speed of radio waves. Hertz arrived at a value near 300,000 kilometers per second, confirming James Clerk Maxwell's theory that radio waves and light were both forms of electromagnetic radiation. Additional proof was gathered during the 1940s and 1950s, when British physicists Keith Davy Froome and Louis Essen employed radio and microwaves, respectively, to more precisely measure the speed of electromagnetic radiation.
Maxwell is also credited with defining the speed of light and other forms of electromagnetic radiation, not through measurement, but by mathematical deduction. During his research attempts to find a link between electricity and magnetism, Maxwell theorized that a changing electrical field produces a magnetic field, the reverse corollary of Faraday's law. He proposed that electromagnetic waves are composed of combined oscillating electric and magnetic waves, and calculated the velocity of these waves through space as:
c = 1/√(e·m), where e is the permittivity and m is the permeability of free space, two constants that can be measured with a relatively high degree of accuracy. The result is a value that closely approximates the measured speed of light.
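Plugging modern values of the vacuum permittivity and permeability into the relation above reproduces the measured speed; the numerical constants below are assumed standard values, not figures given in the article.

```python
import math

EPSILON_0 = 8.8541878128e-12   # vacuum permittivity in F/m (modern value, assumed)
MU_0 = 4 * math.pi * 1e-7      # vacuum permeability in H/m (classical value)

c = 1 / math.sqrt(EPSILON_0 * MU_0)
print(f"c = 1 / sqrt(e * m) = {c:,.0f} m/s")   # ~299,792,458 m/s
```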
In 1891, continuing his studies on the speed of light and astronomy, Michelson created a large-scale interferometer using the refracting telescope at the Lick Observatory in California. His observations were based on the delay in the arrival time of light when viewing distant objects, such as stars, which can be quantitatively analyzed to measure both the size of celestial bodies and the speed of light. Almost 30 years later, Michelson moved his experiments to the Mount Wilson Observatory, and applied the same techniques to the 100-inch telescope, the world's largest at the time.
By incorporating an octagonal rotating mirror into his experimental design, Michelson arrived at a value of 299,845 kilometers per second for the speed of light. Although Michelson died before completing his experiments, his co-worker at Mount Wilson, Francis G. Pease, continued to employ the innovative technique to conduct research into the 1930s. Using a modified interferometer, Pease made numerous measurements over several years and finally determined that the correct value for the speed of light is 299,774 kilometers per second, the closest measurement achieved to that date. Several years later, in 1941, the scientific community set a standard for the speed of light. This value, 299,773 kilometers per second, was based on a compilation from the most accurate measurements of the period. Figure 6 presents a graphical representation of light speed measurements over the past 200 years.
By the late 1960s, lasers were becoming stable research tools with highly defined frequencies and wavelengths. It quickly became obvious that a simultaneous measurement of frequency and wavelength would yield a very accurate value for the speed of light, similar to an experimental approach carried out by Keith Davy Froome using microwaves in 1958. Several research groups in the United States and in other countries measured the frequency of the 633-nanometer line from an iodine-stabilized helium-neon laser and obtained highly accurate results. In 1972, the National Institute of Standards and Technology employed the laser technology to measure the speed at 299,792,458 meters per second (186,282 miles per second), which ultimately resulted in the redefinition of the meter through a highly accurate estimate for the speed of light.
Starting with Roemer's 1676 breakthrough endeavors, the speed of light has been measured at least 163 times utilizing a wide variety of different techniques by more than 100 investigators (see Table 1 for a compilation of methods, investigators, and dates). As scientific methods and devices were refined, the error limits of the estimates narrowed, although the speed of light has not significantly changed since Roemer's seventeenth century calculations. Finally in 1983, more than 300 years after the first serious measurement attempt, the speed of light was defined as being 299,792.458 kilometers per second by the Seventeenth General Conference on Weights and Measures. Thus, the meter is defined as the distance light travels through a vacuum during a time interval of 1/299,792,458 seconds. In general, however, (even in many scientific calculations) the speed of light is rounded to 300,000 kilometers (or 186,000 miles) per second. Arriving at a standard value for the speed of light was important for establishing an international system of units that would enable scientists from around the world to compare their data and calculations.
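Since 1983 the relationship therefore runs the other way round: c is fixed by definition and the meter is derived from it. A small sketch showing the definition and the size of the error introduced by the everyday 300,000 km/s rounding:

```python
C_DEFINED_M_S = 299_792_458                 # exact by definition since 1983

metre = C_DEFINED_M_S * (1 / 299_792_458)   # distance light travels in 1/299,792,458 s
print(metre)                                # 1.0 metre

error_pct = (300_000_000 - C_DEFINED_M_S) / C_DEFINED_M_S * 100
print(f"Error of the 300,000 km/s approximation: {error_pct:.3f}%")   # ~0.069%
```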
There is a mild controversy over whether evidence exists that the speed of light has been slowing since the time of the Big Bang, when it may have moved significantly faster, as suggested by some investigators. Although arguments presented and countered perpetuate this debate, most scientists still contend that the speed of light is a constant. Physicists point out that the actual speed of light as measured by Roemer and his followers has not significantly changed, but rather point to a series of refinements in scientific instrumentation associated with increases in precision of the measurements utilized to establish the speed of light. Today, the distance between Jupiter and the Earth is known with a high degree of accuracy, as are the diameter of the solar system and the orbital trajectories of the planets. When researchers apply this data to rework the calculations made over the past few centuries, they derive values for the speed of light comparable to those obtained with more modern and sophisticated instrumentation.
Kenneth R. Spring - Scientific Consultant, Lusby, Maryland, 20657.
Thomas J. Fellers, Lawrence D. Zuckerman, and Michael W. Davidson - National High Magnetic Field Laboratory, 1800 East Paul Dirac Dr., The Florida State University, Tallahassee, Florida, 32310.
Differing Speed Scenarios AP Calculus AB February 2007
Acknowledgement Mr Bob Dixon, Physics teacher extraordinaire, was essential to this project. He not only provided all the equipment; he also taught us how to use VideoPoint software.
Team 1 Positive Acceleration, Positive Velocity By Emma And Mrs. Clemens
Preparation First we obtained three meter sticks and set them out in a straight line to form our axis that the craft travels upon. We then used Mr. Dixon’s hovercraft as our vehicle moving along the axis. In my team’s demonstration, we started our craft at -3 and then floored the controls to the end of our meter sticks to demonstrate positive velocity and acceleration.
Position The craft started at -3 meters, and as time went on it moved in the positive direction because of its positive velocity, eventually reaching the origin. The graph is parabolic because the velocity keeps increasing as a result of the craft's positive acceleration.
Positive Velocity The craft started at rest, so velocity started at 0 m/sec. Then as time went on, velocity increased because of positive acceleration. If it was constant velocity, it would be a horizontal line, but the positive slope means acceleration factors in.
Positive Acceleration Acceleration is roughly constant (the horizontal pink line on the graph, in m/sec²). The acceleration actually fluctuates a bit, but overall it stays positive, and causes the craft to speed up as time goes on.
Speed The speed of the craft starts at 0 m/sec because the hovercraft was at rest. As velocity increases, speed increases until the craft is traveling at about 2.5 m/sec.
Summary Our hovercraft started at rest at a position of -3 meters. It then positively accelerated with positive velocity up to and beyond the origin at 0. Velocity continued to increase as time went on, and acceleration fluctuated slightly about a roughly constant positive value (in m/sec²). Speed is the same as velocity, since the velocity was always positive.
Positive Velocity and Negative Acceleration Team 2 Levi and David By Dave n’ Levi
Set Up First, we obtained a hovercraft. It modeled frictionless motion and allowed us to obtain results for position, velocity, and acceleration. We used meter sticks to scale our hovercraft’s motion.
Using our data points on the graph of f, we see that f is increasing. However, we also see that the slope of f is decreasing. This means that although our hovercraft was moving forward, its velocity was decreasing, meaning that it was moving forward at a slower and slower rate. (f(2) - f(1))/(2 - 1) = .344 and (f(21) - f(20))/(21 - 20) = .092. This shows that the average velocity early in the interval is greater than the average velocity later in the interval, which shows negative acceleration.
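The two quotients on this slide are finite differences of the position data; applying the same idea to the resulting velocities gives an estimate of the acceleration. Below is a minimal sketch using made-up sample values that stand in for the actual VideoPoint data.

```python
times = [0.0, 1.0, 2.0, 3.0]
positions = [0.00, 0.60, 0.94, 1.10]   # hypothetical values standing in for the VideoPoint data

# Average velocity between consecutive samples: (f(t2) - f(t1)) / (t2 - t1)
velocities = [(positions[i + 1] - positions[i]) / (times[i + 1] - times[i])
              for i in range(len(times) - 1)]
# Average acceleration from consecutive velocity estimates
accelerations = [(velocities[i + 1] - velocities[i]) / (times[i + 1] - times[i])
                 for i in range(len(velocities) - 1)]

print(velocities)      # roughly [0.6, 0.34, 0.16] -> positive but shrinking: positive velocity
print(accelerations)   # roughly [-0.26, -0.18]    -> negative: the craft is slowing down
```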
Using our data points for the graph of f ', we see that the velocity of the hovercraft is positive, yet decreasing. This means that our hovercraft was getting slower while moving forward. The data table shows that the velocity values range from … m/sec to .8112 m/sec. Both of these values are positive, meaning that the hovercraft was moving forward, yet the hovercraft was clearly moving at a faster rate in the beginning. In this particular case the speed function is equivalent to the velocity function because the velocity in this example is always positive and speed is the absolute value of velocity.
Using the table values for f '', we see that every value of f '' is negative. The graph also shows this. Because f '' is always negative, this accounts for the hovercraft's "slowing down." A better model for a(t) would be a constant function, with a(t) equal to the constant slope of the velocity function.
Summary of Data Our hovercraft experienced positive velocity and negative acceleration. This means that although our craft was always moving forward, it was slowing down. Our velocity was positive and decreasing, and our acceleration was (with some variations) a negative constant.
Visualizing Speed (Team 3) By Ryan & Stacy
Set-Up We set up meter sticks to give a scale for the experiment. Used a hovercraft as the object undergoing the force changes to minimize friction. Videotaped the experiments to have a record. As Team 3, we will be showing you negative starting velocity combined with positive acceleration.
Negative Velocity + Positive Acceleration In Action
Position As Team 3, we started our hovercraft at 0.5 m on the "positive" side of the origin, which was set two meters to the right along the three meter sticks (basically we started a little to the left of the end). The hovercraft headed in the negative direction because of its negative starting velocity, but it went slower as time went on because the acceleration was pulling it in the positive direction (it was going backwards but slowing down).
Velocity Velocity is calculated by dividing the displacement by the time it took to travel that distance. We started with a negative velocity, which then became more negative because of human error (we should have clipped the video to start around 6 secs so that only positive acceleration would be shown). Positive acceleration then began to slow the craft down; it gradually made the velocity less and less negative as time went on. At the end you can see the velocity became slightly positive because of the continuation of the positive acceleration.
Acceleration Acceleration is the derivative of velocity, so for our roughly linear velocity graph the acceleration graph is approximately a straight, horizontal line. If we had clipped the video correctly, it would sit at a constant positive y-value. While the velocity is still negative, the positive acceleration slows the craft down.
Speed Speed is the absolute value of velocity, so its value is always positive as long as the object is moving. Our craft's speed was highest at the beginning; the positive acceleration then slowed the craft down, so the speed decreased toward zero. The speed then began increasing again once the acceleration started moving the craft in the positive direction.
AP Calc Video Project Group 4: Catherine and Maria
The Data The x-values are negative as time increases, with the y-values staying approximately the same, which makes sense, as the hovercraft in the video goes from right to left and not up or down. The x-values are becoming increasingly farther apart as time goes on, showing that the craft has both negative velocity and negative acceleration.
The Graphs: Position vs Time This graph shows position as a function of time. The hovercraft in the video begins around -1.5 m and moves away from the origin at an increasing rate, so it makes sense that the graph shows the position becoming more negative more quickly as time goes on. This parabola can be estimated by a quadratic with leading coefficient -0.44, i.e. s(t) ≈ -0.44t² plus lower-order terms, which, judging from the r² value of 1, is incredibly close to what happens on the video.
The Graphs: Velocity vs Time This graph shows the velocity as a function of time, starting at rest and becoming progressively more negative. In the video, the hovercraft starts at rest and becomes "faster," but is going in a negative direction. Note that this graph, a line, is the derivative of position, a parabola. The function is estimated by v(t) = -0.908t + 0.019, which is very close to s'(t), the derivative of the fitted position curve (roughly -0.88t plus a small constant).
The Graphs: Speed vs Time This approximate graph shows the speed of the hovercraft as a function of time. Note that it is the absolute value of the velocity graph. In the video, the hovercraft starts at rest and gets increasingly faster, as is shown by this graph.
The Graphs: Acceleration vs Time This graph shows acceleration as a function of time. Note that it, a generally constant line, is the derivative of the velocity graph, a line. In the video, the hovercraft accelerates at approximately the same rate, as shown by this graph. The acceleration is roughly -1 m/s².
Summary The hovercraft begins at rest at about x = -1.5 m, and then experiences negative velocity and negative acceleration as it "speeds up" in the negative direction over two seconds. It ends up farther from the origin in the negative direction, with an almost constant y-value around y = 0.12 m. The acceleration is nearly constant as well, around -1 m/s², with the fitted position a quadratic whose leading coefficient is -0.44, the fitted velocity v(t) = -0.908t + 0.019 (roughly -t), and speed as a function of time equal to |-0.908t + 0.019|. This can be seen in the table as well, as the values are becoming more negative and increasingly farther apart from adjacent points.
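Because acceleration is the derivative of velocity, Team 4's fitted curves can be cross-checked with a few lines of Python. Only the coefficients actually quoted on the slides are used here: the velocity fit v(t) = -0.908t + 0.019 and the -0.44 leading coefficient of the position fit.

# Cross-check of Team 4's fits using only the quoted coefficients.
V_SLOPE = -0.908      # slope of the fitted velocity line, in (m/s) per s
V_INTERCEPT = 0.019   # fitted velocity at t = 0, in m/s
S_LEAD = -0.44        # t^2 coefficient of the fitted position quadratic

# For a linear velocity, the acceleration is simply the slope of the line.
print("acceleration =", V_SLOPE, "m/s^2 (the slides round this to about -1)")

# Under constant acceleration a, position grows like (a/2)*t^2, so the t^2
# coefficient of s(t) should be roughly half the velocity slope.
print("expected t^2 coefficient =", V_SLOPE / 2, "vs fitted value", S_LEAD)

def speed(t):
    # Speed is the absolute value of velocity.
    return abs(V_SLOPE * t + V_INTERCEPT)

print("speed at t = 2 s:", round(speed(2), 2), "m/s")

The two checks agree with the slides: the slope of the velocity line is about -0.9, close to the quoted -1 m/s² acceleration, and half of it (about -0.45) is close to the fitted -0.44 leading coefficient of the position parabola.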
Overall Summary Speed depends on the signs of both velocity and acceleration: going faster to the right means positive velocity and positive acceleration; slowing down but still moving right means positive velocity and negative acceleration; going faster to the left means negative velocity and negative acceleration; slowing down but still moving left means negative velocity and positive acceleration. Conclusion: an object is speeding up when the signs of velocity and acceleration are the same and slowing down when the signs are opposite.
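The conclusion above is easy to express as a small Python sketch; the function below is purely illustrative and simply encodes the sign rule stated in the summary.

# Classify motion from the signs of velocity and acceleration.
def describe_motion(velocity, acceleration):
    if velocity == 0 or acceleration == 0:
        return "constant velocity or momentarily at rest"
    direction = "right" if velocity > 0 else "left"
    if (velocity > 0) == (acceleration > 0):
        return "speeding up while moving " + direction
    return "slowing down while moving " + direction

print(describe_motion(2.5, 0.8))    # same signs: speeding up, moving right
print(describe_motion(0.8, -1.0))   # opposite signs: slowing down, moving right
print(describe_motion(-0.9, -1.0))  # same signs: speeding up, moving left
print(describe_motion(-0.9, 1.0))   # opposite signs: slowing down, moving left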
There are five main theories of "truth": the correspondence theory, the coherence theory, and the pragmatic, redundancy, and semantic theories. All of these theories are concerned with the truth and falsity of what people say or think.
The correspondence theory of truth states that the truth or falsity of a statement can only be judged by its relationship to the world and whether it actually describes the world accurately; true statements therefore correspond to the actual state of affairs. This model is a traditional way of thinking and can be traced back to Greek philosophers such as Aristotle, Socrates and Plato. The theory has two sides: on the one hand it deals with thoughts or statements, and on the other hand with things or facts, and it conjectures a relationship between the two. As Aristotle stated in his Metaphysics:
“To say that (either) that which is is not or that which is not is, is a falsehood; and to say that that which is is and that which is not is not, is true” (Aristotle)
The correspondence theory can be split into two main categories, the first being correspondence as congruence. Correspondence as congruence claims that for a statement to be true it must have a structural isomorphism that is directly linked to the state of affairs in the world that makes it true.
This is best demonstrated in Russell’s “Theory of Judgment”, in which he proposed that belief cannot be a binary relation between the believer and a fact, since on such a view one could not have false beliefs. As an alternative, Russell construed belief as a multigrade relation between the believer and the objects of the belief. For example:
“Othello believes that Desdemona loves Cassio”
This statement can be seen as “true” in Russell’s eyes if the objects of the belief are related as they are judged to be related, that is, if Desdemona does love Cassio. However, one of the main criticisms of Russell is that on his theory it is impossible to hold a false belief about non-existents, although it is obvious that there are such false beliefs; for example, a child may believe that Santa Claus has a white beard, yet the sentence itself would be said to be false as there is no such thing as Santa. Richard Kirkham (1992) states, in relation to this, that the theory of descriptions can be applied to sentences but not to beliefs, as it is impossible to judge non-existents on Russell’s theory. There is also a serious hole in this theory, as some sentences pose difficulties for the model: a “small cheque” is a kind of cheque, but a “counterfeit cheque” may not be on Russell’s account, since adjectives such as “counterfeit” lose their simple meaning. This caused Russell to abandon his theory and develop a new theory of judgment in 1919.
Correspondence as correlation is the second half of the correspondence theory and was developed by John Austin. Austin theorised that there does not necessarily need to be a structural relationship between a true statement and the state of affairs that makes it true, and he tried to show that the value of truth plays only a small part in the range of utterances. Austin strongly disagreed with the presumption that utterances always have to “constate” or “describe” their subject, thereby making them true or false, and thus Austin introduced “performative” sentences.
Performative utterances are not true or false, that is, they are not truth-evaluable; instead they can be said to be “happy” or “unhappy” (J.S. Andersson (1975)). Uttering such performatives can be said to be performing a certain type of illocutionary action. This, to Austin, would not just be described as:
“…just saying or describing something”(J.L. Austin (1962))
Austin gives an example of a performative utterance:
“I bet you six pence it will rain tomorrow”(J.L. Austin (1962))
In making this utterance you are taking on an obligation, not just simply stating what you are doing. However, if, for example, you do not keep your word and hand over the sixpence when it rains, then although this is not in keeping with the utterance, the sentence is not false; it can only be said to be “happy” or “unhappy”, which also demonstrates how the sentence can never be true. However, J.R. Searle argues that performatives are in fact true or false, and says that performatives are what we would otherwise call declarations, a technical notion of Searle’s account:
“…the successful performance of the speech act is sufficient to bring about the fit between words and world, to make the propositional content true.” (J.R. Searle(1989)).
Bach and Harnish (1991) agree with Searle in saying that performatives can be true or false, though for different reasons. They believed that these performatives are really statements, not declarations. On the other hand, Bach and Harnish attack Searle by stating that ordinary performatives do not need rationalisation, because they are an ordinary and successful way of communicating when the audience can infer your communicative intention. This contrasts with Searle’s viewpoint, as he holds that performatives are “declarations”; declarations are only “accidentally communicative” and are only really successful if they fulfil the conventions. Bach and Harnish finally argued that even though communicative success relies on the utterances being taken as statements, the performative force of performatives does not.
B. The Coherence Theory
The coherence theory differs from the correspondence theory for two main reasons, the first being that the two theories give different accounts of the relation between propositions and their truth conditions. According to the coherence theory, the relation is that of coherence. There are also several versions of the coherence theory, which differ on two major points; in particular, the different versions give different accounts of the coherence relation.
According to some early versions of the theory, coherence can simply be understood as consistency; to say that a proposition coheres with a specific set of propositions is to say that the proposition is consistent with that set. This version can be deemed unsatisfactory for the following reason: two propositions that belong to different sets could both be consistent with a specified set while simultaneously being inconsistent with each other. The second and more credible version of the coherence theory holds that coherence is some form of entailment. On this version, a proposition coheres with a set of propositions if and only if it is entailed by members of the set.
There are two principal lines of argument that have led philosophers to adopt a coherence theory of truth. Early advocates were moved by metaphysical considerations, while more recently attention has been paid to the epistemological and semantic basis of coherence. The earliest versions of the coherence theory were associated with idealism, and the theory was adopted by a number of British idealists in the latter years of the nineteenth century, for example F.H. Bradley (1914).
It can be said that idealists are led to the coherence theory because of their metaphysical position. Advocates of the correspondence theory believe that statements and beliefs are ontologically distinct from the objective states of affairs which make those beliefs true. Idealists, on the other hand, do not believe that there is an ontological divide between beliefs and what makes them true. From the idealist point of view, reality is simply a collection of beliefs. Accordingly, a belief cannot be accurate or true because it corresponds to something that is not a belief. Instead, the truth of a belief can only consist in its coherence with other held beliefs, and therefore truth, on an idealist perspective, comes in degrees: a belief is true to the degree that it coheres with other beliefs.
Bearing this in mind, Candlish (2006) has stated that F. H. Bradley described an identity theory, not a coherence theory.
There is another route to the coherence theory, an epistemological one. Blanshard (1939) argued that:
“… a coherence theory of justification leads to a coherence theory of truth.”
His argument is as follows: someone might believe that coherence with a set of beliefs is the test of truth, but that truth itself consists in correspondence with objective facts. Nevertheless, if truth consists in correspondence to facts, coherence with a set of beliefs cannot be a guaranteed test for the truth. This is the case because there is no concrete guarantee that a perfectly coherent set of beliefs is a foolproof test for the truth. If coherence is simply a good but fallible test for the truth, then the argument fails (Rescher 1973). There is a "falling apart" of truth and its test, as Blanshard suggests, if coherence can only be seen as a fallible test.
Another viewpoint is that we cannot "get outside" or "escape" our own beliefs, making it difficult to compare statements with objective facts. A version of this argument was adopted by logical positivists such as Neurath (1983) and Hempel (1935). This argument is similar to Blanshard's in that it depends on a coherence theory of justification. This line of argument infers that we can never know whether a proposition corresponds to reality.
This argument is open to two main criticisms. Firstly, it relies on a coherence theory of justification and is therefore susceptible to any objections to that theory. Secondly, a coherence theory of truth does not follow from these premisses. We cannot infer that a proposition which cannot be known to correspond to reality does not correspond to reality. Even if correspondence theorists agree that we can only know the propositions which cohere with our beliefs, they can still believe that truth consists in correspondence; if so, then it must be accepted that there are truths which cannot be known. Otherwise, it can be said that the coherence of a statement with a fixed set of beliefs is a valid indicator that the statement corresponds to objective facts, and that we can safely know that propositions correspond. This was the viewpoint of Davidson (1986).
It is felt that coherence theorists need to show that propositions cannot correspond to objective facts, not just that they cannot be known to. As noted, the coherence and correspondence theories have different views about the conditions of truth. One way to help decide which of these accounts is correct is to consider the procedure by which propositions are assigned truth conditions.
Finally, coherentists can argue that the only condition under which a speaker can justify his or her own propositions is in relation to his or her other beliefs (Young (1995)).
There are many criticisms of the coherence theory of truth; however, only two will be focused on here: the specification objection and the transcendence objection.
The specification objection states that coherence theorists have no way to specify the relevant set of propositions without contradicting their own position. This argument can first be seen in Russell (1907).
However, there are other uses of "truth" and the word "true"; for example, we speak of a true friend, but this use is usually set aside as perhaps derivative and at any rate different. Many views are held about how the content of what we say and think should be specified, leading us to be concerned with what the bearers of truth are; for Wittgenstein the world consisted of facts. Human beings are made aware of facts by virtue of our mental representations and thoughts. These thoughts are expressed in propositions, whose form indicates the position of these facts in reality. Everything that is true, that is, all the facts that constitute the world, can (in principle) be expressed by atomic sentences.
Tautological expressions occupy a special role in this language framework because they are true under all conditions; however, tautologies are literally nonsense, as they convey nothing about what the facts truly are. Despite this, since they are true under all conditions, tautologies reveal the underlying structure of all language, and thereby of thought and reality. This fits with the idea in Wittgenstein's Tractatus (6.1) that the most general, logical features of the world are not themselves additional facts about it.
Much like beauty, value is entirely absent from propositions. Facts are just facts; everything else that gives the world meaning must reside elsewhere. Wittgenstein was trying to achieve a properly logical language, and therefore one dealing only with what is true. Aesthetic judgements about what is beautiful and ethical judgements about what is good simply cannot be expressed within logical language, since they transcend what can be pictured in thought. This can be seen as a major problem, as it would leave all the major questions of traditional philosophy not only unanswered but also un-askable. It is therefore not unfair to conclude that the Tractatus itself is nothing more than useful nonsense.
“Whereof one cannot speak, thereof one must be silent.”
This stark and lone statement renders literally all of human life unspeakable. It was this carefully delineated sense of what logical language can properly express which influenced the ideas of Logical Positivism. Wittgenstein himself proposed that there was nothing left for philosophers to do, which is reflected in his abandonment of the discipline for nearly a decade.
The problem with Wittgenstein’s logical analysis is that it demands too much precision, both in the definitions of words and in the representations of their logical structure. In ordinary language, applications of a word often only bear a “family resemblance” to one another; also there are many grammatical forms of expressing the same basic thought. However, under these conditions. |
History of Virginia
The History of Virginia begins with documentation by the first Spanish explorers to reach the area in the 1500s, when it was occupied chiefly by Algonquian, Iroquoian, and Siouan peoples. After a failed English attempt by Sir Walter Raleigh to settle Virginia in the 1580s, permanent European settlement began in Virginia with Jamestown in 1607. The colony was a commercial venture sponsored by London businessmen, who sent individual men to Virginia to look for gold. They did not send families. There was no gold, and the colonists could barely feed themselves. The colony nearly failed until tobacco emerged as a profitable export. It was grown on plantations, using primarily indentured servants for the intensive hand labor involved. After 1662, the colony turned black slavery into a hereditary racial caste. By 1750, the primary cultivators of the cash crop were West African slaves. While the plantations thrived because of the high demand for tobacco, most white settlers raised their families on subsistence farms. Warfare with the Virginia Indian nations had been a factor in the 17th century; after 1700 there was continued conflict with natives east of the Alleghenies, especially in the French and Indian War (1754-1763), when the tribes were allied with the French. The westernmost counties, including Wise and Washington, only became safe with the death of Bob Benge in 1794.
The Virginia Colony became the wealthiest and most populated British colony in North America, with an elected General Assembly. The colony was dominated by rich planters who were also in control of the established Anglican Church. Baptist and Methodist preachers brought the Great Awakening, welcoming black members and leading to many evangelical and racially integrated churches. Virginia planters had a major role in gaining independence and in the development of democratic-republican ideals of the United States. They were important in the Declaration of Independence, in writing the Constitution at the Constitutional Convention (while preserving protection for the slave trade), and in establishing the Bill of Rights. The state of Kentucky separated from Virginia in 1792. Four of the first five presidents were Virginians: George Washington, the "Father of his country"; and after 1800, "The Virginia Dynasty" of presidents for 24 years: Thomas Jefferson, James Madison, and James Monroe.
During the first half of the 19th century, tobacco prices declined and tobacco lands lost much of their fertility. Planters adopted mixed farming, with an emphasis on wheat and livestock, which required less labor. The Constitutions of 1830 and 1850 expanded suffrage but did not equalize white male apportionment statewide. The population grew slowly from 700,000 in 1790, to 1 million in 1830, to 1.2 million in 1860. Virginia was the largest state joining the Confederate States of America in 1861. It became the major theater of war in the American Civil War. Unionists in western Virginia created the separate state of West Virginia. Virginia's economy was devastated in the war and disrupted in Reconstruction, when it was administered as Military District Number One. The first signs of recovery were seen in tobacco cultivation and the related cigarette industry, followed by coal mining and increasing industrialization. In 1883 conservative white Democrats regained power in the state government, ending Reconstruction and implementing Jim Crow laws. The 1902 Constitution limited the number of white voters below 19th-century levels and effectively disfranchised blacks until federal civil rights legislation of the mid-1960s.
From the 1920s to the 1960s, the state was dominated by the Byrd Organization, with dominance by rural counties aligned in a Democratic party machine, but their hold was broken over their failed Massive Resistance to school integration. After World War II, the state's economy thrived, with a new industrial and urban base. A statewide community college system was developed. The first U.S. African-American governor since Reconstruction was Virginia's Douglas Wilder in 1990. Since the late 20th century, the contemporary economy has become more diversified in high-tech industries and defense-related businesses. Virginia's changing demography makes for closely divided voting in national elections but it is still generally conservative in state politics.
For thousands of years before the arrival of the English, various societies of indigenous peoples inhabited the portion of the New World later designated by the English as "Virginia". Archaeological and historical research by anthropologist Helen C. Rountree and others has established 3,000 years of settlement in much of the Tidewater. Even so, a historical marker dedicated in 2015 states that recent archaeological work at Pocahontas Island has revealed prehistoric habitation dating to about 6500 BCE.
At the end of the 16th century, native inhabitants of what is now Virginia belonged to three major groups, classified by modern anthropologists chiefly on the basis of language-families. The largest group, the Algonquian, numbered over 10,000 and occupied most of the coastal area up to the fall line. Groups to the interior were the Iroquoian (numbering 2,500) and the Siouan. Tribes included the Algonquian Chesepian, Chickahominy, Doeg, Mattaponi, Nansemond, Pamunkey, Pohick, Powhatan, and Rappahannock; the Siouan Monacan and Saponi; and the Iroquoian-speaking Cherokee, Meherrin, Nottoway, and Tuscarora.
When the first English settlers arrived at Jamestown in 1607, Algonquian tribes controlled most of Virginia east of the fall line. Nearly all were united in what has been historically called the Powhatan Confederacy. Rountree has noted that "empire" more accurately describes their political structure. In the late 16th and early 17th centuries, a chief named Wahunsunacock created this powerful empire by conquering or affiliating with approximately thirty tribes whose territories covered much of what is now eastern Virginia. Known as the Powhatan, or paramount chief, he called this area Tenakomakah ("densely inhabited Land"). The empire was advantageous to some tribes, who were periodically threatened by other groups, such as the Monacan.
Early European exploration
After their discovery of the New World in the 15th century, European states began trying to establish New World colonies. England, the Dutch Republic, France, Portugal, and Spain were the most active.
In 1540, a party led by two Spaniards, Juan de Villalobos and Francisco de Silvera, sent by Hernando de Soto, entered what is now Lee County in search of gold. In the spring of 1567, Hernando Moyano de Morales, a sergeant of Spanish explorer Juan Pardo, led a group of soldiers northward from Fort San Juan in Joara, a native town in what is now western North Carolina, to attack and destroy the Chisca village of Maniatique near present-day Saltville. The attack near Saltville was the first recorded battle in Virginia history.
Another Spanish party, captained by Antonio Velázquez in the caravel Santa Catalina, explored the lower Chesapeake Bay region of Virginia in mid-1561 under the orders of Ángel de Villafañe. During this voyage, two Kiskiack or Paspahegh youths, including Don Luis, were taken back to Spain. In 1566, an expedition sent from Spanish Florida by Pedro Menéndez de Avilés reached the Delmarva Peninsula. The expedition consisted of two Dominican friars, thirty soldiers and Don Luis, in a failed effort to set up a Spanish colony in the Chesapeake, believing it to be an opening to the fabled Northwest Passage.
In 1570, Spanish Jesuits established the Ajacán Mission on the lower peninsula. However, in 1571 it was destroyed by Don Luis and a party of his indigenous allies. In August 1572, Pedro Menéndez de Avilés arrived from St. Augustine with thirty soldiers and sailors to take revenge for the massacre of the Jesuits, and hanged approximately 20 natives. In 1573, the governor of Spanish Florida, Pedro Menéndez de Márquez, conducted further exploration of the Chesapeake. In the 1580s, captain Vicente González led several voyages into the Chesapeake in search of English settlements in the area. In 1609, Spanish Florida governor Pedro de Ibarra sent Francisco Fernández de Écija from St. Augustine to survey the activities of the Jamestown colonists, yet Spain never attempted a colony after the failure of the Ajacán Mission.
The Roanoke Colony was the first English colony in the New World. It was founded at Roanoke Island in what was then Virginia, now part of Dare County, North Carolina. Between 1584 and 1587, there were two major groups of settlers sponsored by Sir Walter Raleigh who attempted to establish a permanent settlement at Roanoke Island, and each failed. The final group disappeared completely after supplies from England were delayed three years by a war with Spain. Because they disappeared, they were called "The Lost Colony."
The name Virginia came from information gathered by the Raleigh-sponsored English explorations along what is now the North Carolina coast. Philip Amadas and Arthur Barlowe reported that a regional "king" named Wingina ruled a land of Wingandacoa. Queen Elizabeth modified the name to "Virginia", perhaps in part noting her status as the "Virgin Queen." Though the word is latinate, it stands as the oldest English language place-name in the United States.
On the second voyage, Raleigh discovered that, while the chief of the Secotans was indeed called Wingina, the expression wingandacoa, heard by the English upon arrival, actually meant "You wear good clothes" in Carolina Algonquian, and was not the native name of the country, as previously misunderstood.
Virginia Company of London
After the death of Queen Elizabeth I, in 1603 King James I assumed the throne of England. After years of war, England was strapped for funds, so he granted responsibility for England's New World colonization to the Virginia Company, which became incorporated as a joint stock company by a proprietary charter drawn up in 1606. There were two competing branches of the Virginia Company and each hoped to establish a colony in Virginia in order to exploit gold (which the region did not actually have), to establish a base of support for English privateering against Spanish ships, and to spread Protestantism to the New World in competition with Spain's spread of Catholicism. Within the Virginia Company, the Plymouth Company branch was assigned a northern portion of the area known as Virginia, and the London Company area to the south.
In December 1606, the London Company dispatched a group of 104 colonists in three ships: the Susan Constant, Godspeed, and Discovery, under the command of Captain Christopher Newport. After a long, rough voyage of 144 days, the colonists finally arrived in Virginia on April 26, 1607 at the entrance to the Chesapeake Bay. At Cape Henry, they went ashore, erected a cross, and did a small amount of exploring, an event which came to be called the "First Landing."
Under orders from London to seek a more inland location safe from Spanish raids, they explored the Hampton Roads area and sailed up the newly christened James River to the fall line at what would later become the cities of Richmond and Manchester.
After weeks of exploration, the colonists selected a location and founded Jamestown on May 14, 1607. It was named in honor of King James I (as was the river). However, while the location at Jamestown Island was favorable for defense against foreign ships, the low and marshy terrain was harsh and inhospitable for a settlement. It lacked drinking water, access to game for hunting, or much space for farming. While it seemed favorable that it was not inhabited by the Native Americans, within a short time, the colonists were attacked by members of the local Paspahegh tribe.
The colonists arrived ill-prepared to become self-sufficient. They had planned on trading with the Native Americans for food, were dependent upon periodic supplies from England, and had planned to spend some of their time seeking gold. Leaving the Discovery behind for their use, Captain Newport returned to England with the Susan Constant and the Godspeed, and came back twice during 1608 with the First Supply and Second Supply missions. Trading and relations with the Native Americans was tenuous at best, and many of the colonists died from disease, starvation, and conflicts with the natives. After several failed leaders, Captain John Smith took charge of the settlement, and many credit him with sustaining the colony during its first years, as he had some success in trading for food and leading the discouraged colonists.
After Smith's return to England in August 1609, there was a long delay in the scheduled arrival of supplies. During the winter of 1609/10 and continuing into the spring and early summer, no more ships arrived. The colonists faced what became known as the "starving time". When the new governor, Sir Thomas Gates, finally arrived at Jamestown on May 23, 1610, along with other survivors of the wreck of the Sea Venture that resulted in Bermuda being added to the territory of Virginia, he discovered over 80% of the 500 colonists had died; many of the survivors were sick.
Back in England, the Virginia Company was reorganized under its Second Charter, ratified on May 23, 1609, which gave most leadership authority of the colony to the governor, the newly appointed Thomas West, 3rd Baron De La Warr. In June 1610, he arrived with 150 men and ample supplies. De La Warr began the First Anglo-Powhatan War, against the natives. Under his leadership, Samuel Argall kidnapped Pocahontas, daughter of the Powhatan chief, and held her at Henricus.
The economy of the Colony was another problem. Gold had never been found, and efforts to introduce profitable industries in the colony had all failed until John Rolfe introduced his two foreign types of tobacco: Orinoco and Sweet Scented. These produced a better crop than the local variety and with the first shipment to England in 1612, the customers enjoyed the flavor, thus making tobacco a cash crop that established Virginia's economic viability.
The First Anglo-Powhatan War ended when Rolfe married Pocahontas in 1614.
George Yeardley took over as Governor of Virginia in 1619. He ended one-man rule and created a representative system of government with the General Assembly, the first elected legislative assembly in the New World.
Also in 1619, the Virginia Company sent 90 single women as potential wives for the male colonists to help populate the settlement. That same year the colony acquired a group of "twenty and odd" Angolans, brought by two English privateers. They were probably the first Africans in the colony. They, along with many European indentured servants helped to expand the growing tobacco industry which was already the colony's primary product. Although these black men were treated as indentured servants, this marked the beginning of America's history of slavery. Major importation of enslaved Africans by European slave traders did not take place until much later in the century.
In some areas, individual rather than communal land ownership or leaseholds were established, providing families with motivation to increase production, improve standards of living, and gain wealth. Perhaps nowhere was this more progressive than at Sir Thomas Dale's ill-fated Henricus, a westerly-lying development located along the south bank of the James River, where natives were also to be provided an education at the Colony's first college.
About 6 miles (9.7 km) south of the falls at present-day Richmond, in Henrico Cittie, the Falling Creek Ironworks was established near the confluence of Falling Creek, using local ore deposits to make iron. It was the first in North America.
Virginians were intensely individualistic at this point, weakening the small new communities. According to Breen (1979) their horizon was limited by the present or near future. They believed that the environment could and should be forced to yield quick financial returns. Thus everyone was looking out for number one at the expense of the cooperative ventures. Farms were scattered and few villages or towns were formed. This extreme individualism led to the failure of the settlers to provide defense for themselves against the Indians, resulting in two massacres.
Conflict with natives
English settlers soon came into conflict with the natives. Despite some successful interaction, issues of ownership and control of land and other resources, and trust between the peoples, became areas of conflict. Virginia has drought conditions an average of every three years. The colonists did not understand that the natives were ill-prepared to feed them during hard times. In the years after 1612, the colonists cleared land to farm export tobacco, their crucial cash crop. As tobacco exhausted the soil, the settlers continually needed to clear more land for replacement. This reduced the wooded land which Native Americans depended on for hunting to supplement their food crops. As more colonists arrived, they wanted more land.
The tribes tried to fight the encroachment by the colonists. Major conflicts took place in the Indian massacre of 1622 and the Second Anglo-Powhatan war, both under the leadership of the late Chief Powhatan's younger brother, Chief Opechancanough. By the mid-17th century, the Powhatan and allied tribes were in serious decline in population, due in large part to epidemics of newly introduced infectious diseases, such as smallpox and measles, to which they had no natural immunity. The European colonists had expanded territory so that they controlled virtually all the land east of the fall line on the James River. Fifty years earlier, this territory had been the empire of the mighty Powhatan Confederacy.
Surviving members of many tribes assimilated into the general population of the colony. Some retained small communities with more traditional identity and heritage. In the 21st century, the Pamunkey and Mattaponi are the only two tribes to maintain reservations originally assigned under the English. As of 2010, the state has recognized eleven Virginia Indian tribes. Others have renewed interest in seeking state and Federal recognition since the celebration of the 400th anniversary of Jamestown in 2007. State celebrations gave Native American tribes prominent formal roles to showcase their contributions to the state.
While the developments of 1619 and continued growth in the several following years were seen as favorable by the English, many aspects, especially the continued need for more land to grow tobacco, were the source of increasing concern to the Native Americans most affected, the Powhatan.
By this time, the remaining Powhatan Empire was led by Chief Opechancanough, chief of the Pamunkey, and brother of Chief Powhatan. He had earned a reputation as a fierce warrior under his brother's chiefdom. Soon, he gave up on hopes of diplomacy, and resolved to eradicate the English colonists.
On March 22, 1622, the Powhatan killed about 400 colonists in the Indian Massacre of 1622. With coordinated attacks, they struck almost all the English settlements along the James River, on both shores, from Newport News Point on the east at Hampton Roads all the way west upriver to Falling Creek, a few miles above Henricus and John Rolfe's plantation, Varina Farms.
At Jamestown, a warning by an Indian boy named Chanco to his employer, Richard Pace, helped reduce total deaths. Pace secured his plantation, and rowed across the river during the night to alert Jamestown, which allowed colonists some defensive preparation. They had no time to warn outposts, which suffered deaths and captives at almost every location. Several entire communities were essentially wiped out, including Henricus and Wolstenholme Towne at Martin's Hundred. At the Falling Creek Ironworks, which had been seen as promising for the Colony, two women and three children were among the 27 killed, leaving only two colonists alive. The facilities were destroyed.
Despite the losses, two thirds of the colonists survived; after withdrawing to Jamestown, many returned to the outlying plantations, although some were abandoned. The English carried out reprisals against the Powhatan and there were skirmishes and attacks for about a year before the colonists and Powhatan struck a truce.
The colonists invited the chiefs and warriors to Jamestown, where they proposed a toast of liquor. Dr. John Potts and some of the Jamestown leadership had poisoned the natives' share of the liquor, which killed about 200 men. Colonists killed another 50 Indians by hand.
The period between the coup of 1622 and another Powhatan attack on English colonists along the James River (see Jamestown) in 1644 marked a turning point in the relations between the Powhatan and the English. In the early period, each side believed it was operating from a position of power; by the Treaty of 1646, the colonists had taken the balance of power, and had established control between the York and Blackwater Rivers.
In 1624, the Virginia Company's charter was revoked and the colony transferred to royal authority as a crown colony, but the elected representatives in Jamestown continued to exercise a fair amount of power. Under royal authority, the colony began to expand to the North and West with additional settlements.
In 1634, a new system of local government was created in the Virginia Colony by order of the King of England. Eight shires were designated, each with its own local officers; these shires were renamed as counties only a few years later.
Governor Berkeley and English Civil War
The first significant attempts at exploring the Trans-Allegheny region occurred under the administration of Governor William Berkeley. Efforts to explore farther into Virginia were hampered in 1644 when about 500 colonists were killed in another Indian massacre led, once again, by Opechancanough. Berkeley is credited with efforts to develop other sources of income for the colony besides tobacco, such as cultivation of mulberry trees for silkworms and other crops at his large Green Spring Plantation.
The colonists defined the 1644 coup as an "uprising". Chief Opechancanough expected the outcome would reflect what he considered the morally correct position: that the colonists were violating their pledges to the Powhatan. During the 1644 event, Chief Opechancanough was captured. While imprisoned, he was murdered by one of his guards. After the death of Opechancanough, and following the repeated colonial attacks in 1644 and 1645, the remaining Powhatan tribes had little alternative but to accede to the demands of the settlers.
Most Virginia colonists were loyal to the crown (Charles I) during the English Civil War, but in 1652, Oliver Cromwell sent a force to remove and replace Gov. Berkeley with Governor Richard Bennett, who was loyal to the Commonwealth of England. This governor was a moderate Puritan who allowed the local legislature to exercise most controlling authority, and spent much of his time directing affairs in neighboring Maryland Colony. Bennett was followed by two more "Cromwellian" governors, Edward Digges and Samuel Matthews, although in fact all three of these men were not technically appointees, but were selected by the House of Burgesses, which was really in control of the colony during these years.
Many royalists fled to Virginia after their defeat in the English Civil War. Some intermarried with existing plantation families to establish influential families in Virginia such as the Washingtons, Randolphs, Carters and Lees. However, most 17th-century immigrants were indentured servants, merchants or artisans. After the Restoration, in recognition of Virginia's loyalty to the crown, King Charles II of England bestowed Virginia with the nickname "The Old Dominion", which it still bears today.
Governor Berkeley, who remained popular after his first administration, returned to the governorship at the end of Commonwealth rule. However, Berkeley's second administration was characterized by many problems. Disease, hurricanes, Indian hostilities, and economic difficulties all plagued Virginia at this time. Berkeley established autocratic authority over the colony. To protect this power, he refused to hold new legislative elections for 14 years in order to preserve a House of Burgesses that supported him. He only agreed to new elections when rebellion became a serious threat.
Berkeley finally did face a rebellion in 1676. Indians had begun attacking encroaching settlers as they expanded to the north and west. Serious fighting broke out when settlers responded to violence with a counter-attack against the wrong tribe, which further extended the violence. Berkeley did not assist the settlers in their fight. Many settlers and historians believe Berkeley's refusal to fight the Indians stemmed from his investments in the fur trade. Large scale fighting would have cut off the Indian suppliers Berkeley's investment relied on. Nathaniel Bacon organized his own militia of settlers who retaliated against the Indians. Bacon became very popular as the primary opponent of Berkeley, not only on the issue of Indians, but on other issues as well. Berkeley condemned Bacon as a rebel, but pardoned him after Bacon won a seat in the House of Burgesses and accepted it peacefully. After a lack of reform, Bacon rebelled outright, captured Jamestown, and took control of the colony for several months. The incident became known as Bacon's Rebellion. Berkeley returned himself to power with the help of the English militia. Bacon burned Jamestown before abandoning it and continued his rebellion, but died of disease. Berkeley severely crushed the remaining rebels.
In response to Berkeley's harsh repression of the rebels, the English government removed him from office. After the burning of Jamestown, the capital was temporarily moved to Middle Plantation, located on the high ground of the Virginia Peninsula equidistant from the James and York Rivers.
Building of Williamsburg
Local leaders had long desired a school of higher education, for the sons of planters and for educating the Indians. An earlier attempt to establish a permanent university at Henricus failed after the Indian Massacre of 1622 wiped out the entire settlement. Finally, seven decades later, with encouragement from the Colony's House of Burgesses and other prominent individuals, Reverend Dr. James Blair, the colony's top religious leader, prepared a plan. Blair went to England and, in 1693, obtained a charter from the Protestant monarchs King William III and Queen Mary II of England, who had deposed the Catholic James II of England in 1688 during the Glorious Revolution. The college was named the College of William and Mary in honor of the two monarchs.
The rebuilt statehouse in Jamestown burned again in 1698. After that fire, upon suggestion of college students, the colonial capital was permanently moved to nearby Middle Plantation again, and the town was renamed Williamsburg, in honor of the king. Plans were made to construct a capitol building and plan the new city according to the survey of Theodorick Bland.
As the English increasingly used tobacco products, tobacco in the American colonies became a significant economic force, especially in the tidewater region surrounding the Chesapeake Bay. Vast plantations were built along the rivers of Virginia, and social/economic systems developed to grow and distribute this cash crop. Some elements of this system included the importation and employment of slaves to grow crops. Planters would then fill large hogsheads with tobacco and convey them to inspection warehouses. In 1730, the Virginia House of Burgesses standardized and improved quality of tobacco exported by establishing the Tobacco Inspection Act of 1730, which required inspectors to grade tobacco at 40 specified locations.
In terms of the white population, the top five percent or so were planters who possessed growing wealth and increasing political power and social prestige. They controlled the local Anglican church, choosing ministers and handling church property and disbursing local charity. They initially sought election to the House of Burgesses, or appointment as justices of the peace. About 60 percent of white Virginians were part of a broad middle class that owned substantial farms; by the second generation, death rates from malaria and other local diseases had declined so much that a stable family structure was possible. The bottom third owned no land, and verged on poverty. Many were recent arrivals, or recently released from indentured servitude. Social stratification was most severe in the Northern Neck, where the Fairfax family had been given a proprietorship. In some districts 70 percent of the land was owned by a handful of families, and three fourths of the whites had no land at all. In the frontier districts, large numbers of Irish and German Protestants had settled, often moving down from Pennsylvania. Tobacco was not important there; farmers focused on hemp, grain, cattle, and horses. Entrepreneurs had begun to mine and smelt the local iron ores.
Sports occupied a great deal of attention at every social level, starting at the top. In England hunting was sharply restricted to landowners, and enforced by armed gamekeepers. In America, game was more than plentiful. Everyone—including servants and slaves—could and did hunt. Poor men with a good rifle aim won praise; rich gentlemen who were off target won ridicule. In 1691 Sir Francis Nicholson, the governor, organized competitions for the "better sort of Virginians onely who are Batchelors," and he offered prizes "to be shot for, wrastled, played at backswords, & Run for by Horse and foott." Horse racing was the main event. The typical farmer did not own a horse in the first place, and racing was a matter for gentlemen only, but ordinary farmers were spectators and gamblers. Selected slaves often became skilled horse trainers. Horse racing was especially important for knitting the gentry together. The race was a major public event designed to demonstrate to the world the superior social status of the gentry through expensive breeding, training, boasting and gambling, and especially winning the races themselves. Historian Timothy Breen explains that horse racing and high-stakes gambling were essential to maintaining the status of the gentry. When they publicly bet a large sum on their favorite horse, it told the world that competitiveness, individualism, and materialism were the core elements of gentry values.
Historian Edmund Morgan (1975) argues that Virginians in the 1650s—and for the next two centuries—turned to slavery and a racial divide as an alternative to class conflict. "Racism made it possible for white Virginians to develop a devotion to the equality that English republicans had declared to be the soul of liberty." That is, white men became politically much more equal than was possible without a population of low-status slaves.
By 1700 the population reached 70,000 and continued to grow rapidly from a high birth rate, low death rate, importation of slaves from the Caribbean, and immigration from Britain and Germany, as well as from Pennsylvania. The climate was mild, the farm lands were cheap and fertile.
Early to mid-1700s: Westward expansion
In 1716, Governor Alexander Spotswood led the Knights of the Golden Horseshoe Expedition, reaching the top ridge of the Blue Ridge Mountains at Swift Run Gap (elevation 2,365 feet (721 m)). Spotswood promoted Germanna, a settlement of German immigrants brought over for the purpose of iron production, in modern-day Orange County.
By the 1730s, the Three Notch'd Road extended from the vicinity of the fall line of the James River at the future site of Richmond westerly to the Shenandoah Valley, crossing the Blue Ridge Mountains at Jarmans Gap. Around this time, Governor William Gooch promoted settlement of the Virginia backcountry as a means to insulate the Virginia colony from Native American and New France settlements in the Ohio Country. In response, a wide variety of settlers traveled southward on the Indian Trail later known as the Great Wagon Road along the Shenandoah Valley from Pennsylvania. Many, including German Palatines and Scotch-Irish American immigrants, settled along former Indian camps. According to Encyclopedia Virginia, "By 1735 there were as many as 160 families in the backcountry region, and within ten years nearly 10,000 Europeans lived in the Shenandoah Valley."
As colonial settlement moved into the piedmont area from the Tidewater/Chesapeake area, there was some uncertainty as to the exact tax boundaries of Virginia land versus the land patent quit-rent rights held by Thomas Fairfax, 6th Lord Fairfax of Cameron in the Northern Neck Proprietary. When Robert "King" Carter died in 1732, Lord Fairfax read about his vast wealth in The Gentleman's Magazine and decided to settle the matter himself by coming to Virginia. Lord Fairfax travelled to Virginia for the first time between 1735 and 1737 to inspect and protect his lands. He employed a young George Washington (Washington's first employment) to survey his lands lying west of the Blue Ridge. Once this legal battle was ironed out, Frederick County, Virginia was founded in 1743, and the "Frederick Town" settlement there received Virginia's fourth city charter in February 1752; it is now known as Winchester, Virginia.
In the late 1740s and the second half of the 18th century, the British angled for control of the Ohio Country. Virginians Thomas Lee and brothers Lawrence and Augustine Washington organized the Ohio Company to represent the prospecting and trading interests of Virginian investors. In 1749, the British Crown, via the colonial government of Virginia, granted the Ohio Company a great deal of this territory on the condition that it be settled by British colonists. Governor Robert Dinwiddie of Virginia was an investor in the Ohio Company, which stood to lose money if the French held their claim. To counter the French military presence in Ohio, in October 1753 Dinwiddie ordered the 21-year-old Major George Washington (whose brother was another Ohio Company investor) of the Virginia Regiment to warn the French to leave Virginia territory. Ultimately, many Virginians were caught up in the resulting French and Indian War that occurred 1754–1763. At the completion of the war, the Royal Proclamation of 1763 forbade all British settlement past a line drawn along the Appalachian Mountains, with the land west of the Proclamation Line known as the Indian Reserve. British colonists and land speculators objected to the proclamation boundary since the British government had already assigned land grants to them. Many settlements already existed beyond the proclamation line, some of which had been temporarily evacuated during Pontiac's War, and there were many already granted land claims yet to be settled. For example, George Washington and his Virginia soldiers had been granted lands past the boundary. Prominent American colonials joined with the land speculators in Britain to lobby the government to move the line further west. Their efforts were successful, and the boundary line was adjusted in a series of treaties with the Native Americans. In 1768 the Treaty of Fort Stanwix and the Treaty of Hard Labour, followed in 1770 by the Treaty of Lochaber, opened much of what is now Kentucky and West Virginia to British settlement within the Virginia Colony. However, the Northwest Territories north of the Ohio continued to be occupied by native tribes until US forces drove them out in the early decades of the 1800s.
The Church of England was legally established in the colony in 1619, and the Bishop of London sent in 22 Anglican clergymen by 1624. In practice, establishment meant that local taxes were funneled through the local parish to handle the needs of local government, such as roads and poor relief, in addition to the salary of the minister. There never was a bishop in colonial Virginia, and in practice the local vestry, consisting of gentry laymen, controlled the parish. By the 1740s, the Anglicans had about 70 parish priests around the colony.
The stress on personal piety opened the way for the First Great Awakening in the mid-18th century, which pulled people away from the formal rituals of the established church. Especially in the back country, most families had no religious affiliation whatsoever, and their low moral standards were shocking to proper Englishmen. The Baptists, Methodists, Presbyterians and other evangelicals directly challenged these lax moral standards and refused to tolerate them in their ranks. Baptists, German Lutherans and Presbyterians funded their own ministers, and favored disestablishment of the Anglican church.
The spellbinding preacher Samuel Davies led the Presbyterians, and converted hundreds of slaves. By the 1760s Baptists were drawing Virginians, especially poor white farmers, into a new, much more democratic religion. Slaves were welcome at the services and many became Baptists at this time. Methodist missionaries were also active in the late colonial period. Methodists encouraged an end to slavery, and welcomed free blacks and slaves into active roles in the congregations.
The Baptists and Presbyterians were subject to many legal constraints and faced growing persecution; between 1768 and 1774, about half of the Baptist ministers in Virginia were jailed for preaching, in defiance of England's Act of Toleration of 1689 that guaranteed freedom of worship for Protestants. At the start of the Revolution, the Anglican Patriots realized that they needed dissenter support for effective wartime mobilization, so they met most of the dissenters' demands in return for their support of the war effort.
Historians have debated the implications of the religious rivalries for the American Revolution. The struggle for religious toleration was played out during the American Revolution, as the Baptists, in alliance with Thomas Jefferson and James Madison, worked successfully to disestablish the Anglican church. After the American victory in the war, the Anglican establishment sought to reintroduce state support for religion. This effort failed when non-Anglicans gave their support to Jefferson's "Bill for Establishing Religious Freedom", which eventually became law in 1786 as the Virginia Statute for Religious Freedom. With freedom of religion the new watchword, the Church of England was disestablished in Virginia. It was rebuilt as the Episcopal Church in the United States, with no connection to Britain.
Revolutionary sentiments first began appearing in Virginia shortly after the French and Indian War ended in 1763. The Virginia legislature had passed the Two-Penny Act to stop clerical salaries from inflating. King George III vetoed the measure, and clergy sued for back salaries. Patrick Henry first came to prominence by arguing against the veto in the Parson's Cause, which he declared tyrannical.
The British government had accumulated a great deal of debt through spending on its wars. To help pay off this debt, Parliament passed the Sugar Act in 1764 and the Stamp Act in 1765. The General Assembly opposed the passage of the Sugar Act on the grounds of no taxation without representation, and in turn passed the "Virginia Resolves" opposing the tax. Governor Francis Fauquier responded by dismissing the Assembly. The Northampton County court overturned the Stamp Act on February 8, 1766. Various political groups, including the Sons of Liberty, met and issued protests against the act. Most notably, Richard Bland published a pamphlet entitled An Enquiry into the Rights of the British Colonies, setting forth the principle that Virginia was a part of the British Empire, not the Kingdom of Great Britain, so it only owed allegiance to the Crown, not Parliament.
The Stamp Act was repealed, but additional taxation from the Revenue Act and the 1769 attempt to transport Bostonian rioters to London for trial incited more protest from Virginia. The Assembly met to consider resolutions condemning the transport of the rioters, but Governor Botetourt, while sympathetic, dissolved the legislature. The Burgesses reconvened in Raleigh Tavern and made an agreement to ban British imports. Britain gave up the attempt to extradite the prisoners and lifted all taxes except the tax on tea in 1770.
In 1773, because of a renewed attempt to extradite Americans to Britain, Richard Henry Lee, Thomas Jefferson, Patrick Henry, George Mason, and others in the legislature created a committee of correspondence to deal with problems with Britain.
After the House of Burgesses expressed solidarity with the actions in Massachusetts, the Governor, Lord Dunmore, again dissolved the legislature. The first Virginia Convention was held August 1–6 to respond to the growing crisis. The convention approved a boycott of British goods and elected delegates to the Continental Congress.
On April 20, 1775, Dunmore ordered the gunpowder removed from the Williamsburg Magazine to a British ship. Patrick Henry led a group of Virginia militia from Hanover in response to Dunmore's order. Carter Braxton negotiated a resolution to the Gunpowder Incident by transferring royal funds as payment for the powder. The incident exacerbated Dunmore's declining popularity. He fled the Governor's Palace to a British ship at Yorktown. On November 7, Dunmore issued a proclamation declaring Virginia was in a state of rebellion. By this time, George Washington had been appointed head of the American forces by the Continental Congress and Virginia was under the political leadership of a Committee of Safety formed by the Third Virginia Convention in the governor's absence.
On December 9, 1775, Virginia militia moved on the governor's forces at the Battle of Great Bridge, winning a victory in the small action there. Dunmore responded by bombarding Norfolk with his ships on January 1, 1776. After the Battle of Great Bridge, little military conflict took place on Virginia soil for the first part of the American Revolutionary War. Nevertheless, Virginia sent forces to help in the fighting to the North and South, as well as the frontier in the northwest.
The Fifth Virginia Convention met on May 6 and declared Virginia a free and independent state on May 15, 1776. The convention instructed its delegates to introduce a resolution for independence at the Continental Congress. Richard Henry Lee introduced the measure on June 7. While the Congress debated, the Virginia Convention adopted George Mason's Bill of Rights (June 12) and a constitution (June 29) which established an independent commonwealth. Congress approved Lee's proposal on July 2 and approved Jefferson's Declaration of Independence on July 4. The constitution of the Fifth Virginia Convention created a system of government for the state that would last for 54 years, converting the House of Burgesses into a bicameral legislature with both a House of Delegates and a Senate. Patrick Henry served as the first Governor of the Commonwealth (1776–1779).
War returns to Virginia
The British briefly brought the war back to coastal Virginia in May 1779. Fearing the vulnerability of Williamsburg, Governor Thomas Jefferson moved the capital farther inland to Richmond in 1780. However, in December, Benedict Arnold, who had betrayed the Revolution and become a general for the British, attacked Richmond and burned part of the city before the Virginia Militia drove his army out of the city.
Arnold moved his base of operations to Portsmouth and was later joined by troops under General William Phillips. Phillips led an expedition that destroyed military and economic targets, against ineffectual militia resistance. The state's defenses, led by General Baron von Steuben, put up resistance in the April 1781 Battle of Blandford, but were forced to retreat. The French General Lafayette and his forces arrived to help defend Virginia, and though outnumbered, engaged British forces under General Charles Cornwallis in a series of skirmishes to help reduce their effectiveness. Cornwallis dispatched two smaller missions under Colonel John Graves Simcoe and Colonel Banastre Tarleton to march on Charlottesville and capture Gov. Jefferson and the legislature, though the attempt was foiled when Jack Jouett rode to warn the Virginia government.
Cornwallis moved down the Virginia Peninsula towards the Chesapeake Bay, where Clinton planned to extract part of the army for a siege of New York City. After surprising American forces at the Battle of Green Spring on July 6, 1781, Cornwallis received orders to move his troops to the port town of Yorktown and begin construction of fortifications and a naval yard; once discovered, American forces surrounded the town. Gen. Washington and his French ally Rochambeau moved their forces from New York to Virginia. The defeat of the Royal Navy by Admiral de Grasse at the Battle of the Virginia Capes ensured French dominance of the waters around Yorktown, thereby preventing Cornwallis from receiving troops or supplies and removing the possibility of evacuation. Following the two-week Siege of Yorktown, Cornwallis decided to surrender. Papers for surrender were officially signed on October 19.
As a result of the defeat, the king lost control of Parliament and the new British government offered peace in April 1782. The Treaty of Paris of 1783 officially ended the war.
Early Republic and antebellum periods
Victory in the Revolution brought peace and prosperity to the new state, as export markets in Europe reopened for its tobacco.
While the old local elites were content with the status quo, younger veterans of the war had developed a national identity. Led by George Washington and James Madison, Virginia played a major role in the Constitutional Convention of 1787 in Philadelphia. Madison proposed the Virginia Plan, which would give representation in Congress according to total population, including a proportion of slaves. Virginia was the most populous state, and it was allowed to count all of its white residents and 3/5 of the enslaved African Americans for its congressional representation and its electoral vote. (Only white men who owned a certain amount of property could vote.) Ratification was bitterly contested; the pro-Constitution forces prevailed only after promising to add a Bill of Rights. The Virginia Ratifying Convention approved the Constitution by a vote of 89–79 on June 25, 1788, making it the tenth state to enter the Union.
Madison played a central role in the new Congress, while Washington was the unanimous choice as first president. He was followed by the Virginia Dynasty, including Thomas Jefferson, Madison, and James Monroe, giving the state four of the first five presidents.
Slavery and freedmen in Antebellum Virginia
The Revolution meant change and sometimes political freedom for enslaved African Americans, too. Tens of thousands of slaves from southern states, particularly in Georgia and South Carolina, escaped to British lines and freedom during the war. Thousands left with the British for resettlement in their colonies of Nova Scotia and Jamaica; others went to England; others disappeared into rural and frontier areas or the North.
Inspired by the Revolution and evangelical preachers, numerous slaveholders in the Chesapeake region manumitted some or all of their slaves, during their lifetimes or by will. From 1,800 persons in 1782, the free black population of Virginia increased to 12,766 (4.3 percent of blacks) in 1790, and to 30,570 in 1810; free blacks thus grew from less than one percent of the total black population in Virginia to 7.2 percent by 1810, even as the overall population increased. One planter, Robert Carter III, freed more than 450 slaves in his lifetime, more than any other planter. George Washington freed all of his slaves at his death.
Many free blacks migrated from rural areas to towns such as Petersburg, Richmond, and Charlottesville for jobs and community; others migrated with their families to the frontier where social strictures were more relaxed. Among the oldest black Baptist congregations in the nation were two founded near Petersburg before the Revolution. Each congregation moved into the city and built churches by the early 19th century.
Twice slave rebellions broke out in Virginia: Gabriel's Rebellion in 1800, and Nat Turner's Rebellion in 1831. White reaction was swift and harsh, and militias killed many innocent free blacks and black slaves as well as those directly involved in the rebellions. After the second rebellion, the legislature passed laws restricting the rights of free people of color: they were excluded from bearing arms, serving in the militia, gaining education, and assembling in groups. As bearing arms and serving in the militia were considered obligations of free citizens, free blacks came under severe constraints after Nat Turner's rebellion.
As the new nation of the United States of America experienced growing pains and began to speak of Manifest Destiny, Virginia, too, found its role in the young republic to be changing and challenging. For one, the vast lands of the Virginia Colony were subdivided into other US states and territories. In 1784 Virginia relinquished its claims to Illinois County, Virginia, except for the Virginia Military District (Southern Indiana). In 1775, Daniel Boone blazed a trail for the Transylvania Company from Fort Chiswell in Virginia through the Cumberland Gap into central Kentucky. This Wilderness Road became the principal route used by settlers for more than fifty years to reach Kentucky from the East. The fledgling US government rewarded veterans of the Revolutionary War with plots of land along the Ohio River in the Northwest Territory. In 1792, three western counties split off to form Kentucky.
A second influence was the greater fertility of the lands in the west. Virginia's heavy farming of tobacco for 200 years had depleted its soils.
The 1803 Louisiana Purchase only accelerated the westward movement of Virginians out of their native state. Many of the Virginians whose grandparents had created the Virginia Establishment began to emigrate and settle westward. Famous Virginian-born Americans affected not only the destiny of the state of Virginia, but the rapidly developing American Old West. Virginians Meriwether Lewis and William Clark were influential in their famous 1804-1806 expedition to explore the Missouri River and possible connections to the Pacific Ocean. Notable names such as Stephen F. Austin, Edwin Waller, Haden Harrison Edwards, and Dr. John Shackelford were famous Texan pioneers from Virginia. Even eventual Civil War general Robert E. Lee distinguished himself as a military leader in Texas during the 1846–48 Mexican–American War.
Historians estimate that one million Virginians left the commonwealth between the Revolution and the Civil War. With this exodus, Virginia experienced a decline in both population and political influence. Prominent Virginians formed the Virginia Historical and Philosophical Society to preserve the legacy and memory of its past. At the same time, with Virginians settling so much of the west, they brought their cultural habits with them. Today, many cultural features of the American South can be attributed to Virginians who migrated west.
Cultural divide between Tidewater planters and Western Virginia farmers
As the western reaches of Virginia were developed in the first half of the 19th century, the vast differences in the agricultural bases, cultures, and transportation needs of the regions became a major issue for the Virginia General Assembly. In the older, eastern portion, slavery contributed to the economy. While planters were moving away from labor-intensive tobacco to mixed crops, they still held numerous slaves, and leasing them out or selling them was also part of their economic prospects. Slavery had become an economic institution upon which planters depended. Watersheds in most of this area eventually drained to the Atlantic Ocean. In the western reaches, families farmed smaller homesteads, mostly without enslaved or hired labor. Settlers were expanding the exploitation of resources: mining of minerals and harvesting of timber. The land drained into the Ohio River Valley, and trade followed the rivers.
Representation in the state legislature was heavily skewed in favor of the more populous eastern areas and the historic planter elite. This was compounded by the partial allowance for slaves when counting population; as neither the slaves nor women had the vote, this gave more power to white men. The legislature's efforts to mediate the disparities ended without meaningful resolution, although the state held a constitutional convention on representation issues. Thus, at the outset of the American Civil War, Virginia was caught not only in national crisis, but in a long-standing controversy within its own boundaries. While other border states had similar regional differences, Virginia had a long history of east-west tensions which finally came to a head; it was the only state to divide into two separate states during the War.
Infrastructure and Industrial Revolution
After the Revolution, various infrastructure projects began to be developed, including the Dismal Swamp Canal, the James River and Kanawha Canal, and various turnpikes. Virginia was home to the first of all Federal infrastructure projects under the new Constitution, the Cape Henry Light of 1792, located at the mouth of the Chesapeake Bay. Following the War of 1812, several Federal national defense projects were undertaken in Virginia. Drydock Number One was constructed in Portsmouth in 1827. Across the James River, Fort Monroe was built to defend Hampton Roads, completed in 1834.
In the 1830s, railroads began to be built in Virginia. In 1831, the Chesterfield Railroad began hauling coal from the mines in Midlothian to docks at Manchester (near Richmond), powered by gravity and draft animals. The first railroad in Virginia to be powered by locomotives was the Richmond, Fredericksburg and Potomac Railroad, chartered in 1834, with the intent to connect with steamboat lines at Aquia Landing running to Washington, D.C. Soon after, others (with equally descriptive names) followed: the Richmond and Petersburg Railroad and Louisa Railroad in 1836, the Richmond and Danville Railroad in 1847, the Orange and Alexandria Railroad in 1848, and the Richmond and York River Railroad. In 1849, the Virginia Board of Public Works established the Blue Ridge Railroad. Under Engineer Claudius Crozet, the railroad successfully crossed the Blue Ridge Mountains via the Blue Ridge Tunnel at Afton Mountain.
Petersburg became a manufacturing center, as well as a city where free black artisans and craftsmen could make a living. In 1860 half its population was black and of that, one-third were free blacks, the largest such population in the state.
With extensive iron deposits, especially in the western counties, Virginia was a pioneer in the iron industry. The first ironworks in the New World was established at Falling Creek in 1619, though it was destroyed in 1622. Eventually there were 80 ironworks, charcoal furnaces, and forges employing 7,000 hands at any one time, about 70 percent of them slaves. Ironmasters hired slaves from local slave owners because they were cheaper than white workers, easier to control, and could not switch to a better employer. But the work ethic was weak, because the wages went to the owner, not to the workers, who were forced to work hard, were poorly fed and clothed, and were separated from their families. Virginia's industry increasingly fell behind Pennsylvania, New Jersey and Ohio, which relied on free labor. Bradford (1959) recounts the many complaints about slave laborers and argues the over-reliance upon slaves contributed to the failure of the ironmasters to adopt improved methods of production for fear the slaves would sabotage them. Most of the blacks were unskilled manual laborers, although Lewis (1977) reports that some were in skilled positions.
Virginia at first refused to join the Confederacy, but did so after President Lincoln on April 15 called for troops from all states; that meant Federal troops crossing Virginia on the way south to subdue South Carolina. On April 17, 1861 the convention voted to secede, and voters ratified the decision on May 23. Immediately the Union army moved into northern Virginia and captured Alexandria without a fight, and controlled it for the remainder of the war. The Wheeling area had opposed secession and remained strong for the Union.
Because of its strategic significance, the Confederacy relocated its capital to Richmond. Richmond was at the end of a long supply line and, as the highly symbolic capital of the Confederacy, became the main target of round after round of invasion attempts. A major center of iron production during the Civil War was located in Richmond at Tredegar Iron Works, which produced most of the artillery for the war. The city was the site of numerous army hospitals. Libby Prison for captured Union officers gained an infamous reputation for its overcrowded and harsh conditions, with a high death rate. Richmond's main defenses were trenches built around it, extending down toward the nearby city of Petersburg. Saltville was a primary source of Confederate salt (critical for food preservation) during the war, leading to the two Battles of Saltville.
The first major battle of the Civil War occurred on July 21, 1861. Union forces attempted to take control of the railroad junction at Manassas, but the Confederate Army reached it first and won the First Battle of Manassas (known as "Bull Run" in Northern naming convention). Both sides mobilized for war; the year 1861 went on without another major fight.
Men from all economic and social levels, both slaveholders and nonslaveholders, as well as former Unionists, enlisted in great numbers on both sides. Areas, especially in the west and along the border, that sent few men to the Confederacy were characterized by few slaves, poor economies, and a history of regional antagonism to the Tidewater.
West Virginia breaks away
The western counties could not tolerate the Confederacy. Breaking away, they first formed a Union-loyal government of Virginia (recognized by Washington), known as the Restored Government of Virginia and eventually based in Alexandria, across the river from Washington. The Restored government did little except give its permission for Congress to form the new state of West Virginia in 1862. From May to August 1861, a series of Unionist conventions met in Wheeling; the Second Wheeling Convention constituted itself as a legislative body called the Restored Government of Virginia. It declared that Virginia was still in the Union but that the state offices were vacant, and elected a new governor, Francis H. Pierpont; this body gained formal recognition by the Lincoln administration on July 4. On August 20 the Wheeling body passed an ordinance for the creation of a new state; it was put to public vote on October 24. The vote was in favor of a new state—West Virginia—distinct from the Pierpont government, which persisted until the end of the war. Congress and Lincoln approved, and, after providing for gradual emancipation of slaves in the new state constitution, West Virginia became the 35th state on June 20, 1863. In effect there were now three states: the Confederate Virginia, the Union Restored Virginia, and West Virginia.
The state and national governments in Richmond did not recognize the new state, and Confederates did not vote there. The Confederate government in Richmond sent in Robert E. Lee. But Lee found little local support and was defeated by Union forces from Ohio. Union victories in 1861 drove the Confederate forces out of the Monongahela and Kanawha valleys, and throughout the remainder of the war the Union held the region west of the Alleghenies and controlled the Baltimore and Ohio Railroad in the north. The new state was not subject to Reconstruction.
Later war years
For the remainder of the war, many major battles were fought across Virginia, including the Seven Days Battles, the Battle of Fredericksburg, the Battle of Chancellorsville, and the Battle of Brandy Station.
Over the course of the war, despite occasional tactical victories and spectacular counter-stroke raids, Confederate control of many regions of Virginia was gradually lost to the Federal advance. By October 1862 the northern 9th and 10th Congressional districts along the Potomac were under Union control. The Eastern Shore, the Northern, Middle, and Lower Peninsulas, and the 2nd congressional district surrounding Norfolk west to Suffolk were permanently Union-occupied by May. Other regions, such as the Piedmont and Shenandoah Valley, regularly changed hands through numerous campaigns.
In 1864, the Union Army attacked Richmond by a direct overland approach in the Overland Campaign, beginning with the Battle of the Wilderness and culminating in the Siege of Petersburg, which lasted from the summer of 1864 to April 1865. By November 6, 1864, Confederate forces controlled only four of Virginia's 16 congressional districts, in the region of Richmond-Petersburg and their Southside counties.
In April 1865, Richmond was burned by the retreating Confederate Army; Lincoln walked the city streets to cheering crowds of newly freed blacks. The Confederate government fled south, pausing in Danville for a few days. The end came when Lee surrendered to Ulysses Grant at Appomattox on April 9, 1865.
Virginia had been devastated by the war, with the infrastructure (such as railroads) in ruins; many plantations burned out; and large numbers of refugees without jobs, food or supplies beyond rations provided by the Union Army, especially its Freedmen's Bureau.
Historian Mary Farmer-Kaiser reports that white landowners complained to the Bureau about unwillingness of freedwomen to work in the fields as evidence of their laziness, and asked the Bureau to force them to sign labor contracts. In response, many Bureau officials "readily condemned the withdrawal of freedwomen from the work force as well as the 'hen pecked' husbands who allowed it." While the Bureau did not force freedwomen to work, it did force freedmen to work or be arrested as vagrants. Furthermore, agents urged poor unmarried mothers to give their older children up as apprentices to work for white masters. Farmer-Kaiser concludes that "Freedwomen found both an ally and an enemy in the bureau."
There were three phases in Virginia's Reconstruction era: wartime, presidential, and congressional. Immediately after the war President Andrew Johnson recognized the Francis Harrison Pierpont government as legitimate and restored local government. The Virginia legislature passed Black Codes that severely restricted Freedmen's mobility and rights; they had only limited rights and were not considered citizens, nor could they vote. The state ratified the 13th amendment to abolish slavery and revoked the 1861 ordinance of secession. Johnson was satisfied that Reconstruction was complete.
Republicans in Congress, however, refused to seat the newly elected state delegation; the Radicals wanted better evidence that slavery and similar forms of serfdom had been abolished, and that the freedmen had been given the rights of citizens. They also were concerned that Virginia leaders had not renounced Confederate nationalism. After winning large majorities in the 1866 national election, the Radical Republicans gained power in Congress. They put Virginia (and nine other ex-Confederate states) under military rule. Virginia was administered as the "First Military District" in 1867–69 under General John Schofield. Meanwhile, the Freedmen became politically active by joining the pro-Republican Union League, holding conventions, and demanding universal male suffrage and equal treatment under the law, as well as demanding disfranchisement of ex-Confederates and the seizure of their plantations. McDonough, finding that Schofield was criticized by conservative whites for supporting the Radical cause on the one hand, and attacked by Radicals for thinking black suffrage was premature on the other, concludes that "he performed admirably" by following a middle course between extremes.
Increasingly, a deep split opened up in the Republican ranks. The moderate element had national support and called itself "True Republicans." The more radical element set out to disfranchise whites—for example, barring a man from office if he had been a private in the Confederate army or had sold food to the Confederate government—and to pursue land reform. About 20,000 former Confederates were denied the right to vote in the 1867 election. In 1867 the radical James Hunnicutt (1814–1880), a white preacher, editor, and Scalawag (a white Southerner supporting Reconstruction), mobilized the black Republican vote by calling for the confiscation of all plantations and turning the land over to Freedmen and poor whites. The "True Republicans" (the moderates), led by former Whigs, businessmen, and planters, while supportive of black suffrage, drew the line at property confiscation. A compromise was reached calling for confiscation only if the planters tried to intimidate black voters. Hunnicutt's coalition took control of the Republican Party and began to demand the permanent disfranchisement of all whites who had supported the Confederacy. The Virginia Republican Party became permanently split, and many moderate Republicans switched to the opposition "Conservatives". The Radicals won the 1867 election for delegates to a constitutional convention.
The 1868 constitutional convention included 33 white Conservatives and 72 Radicals (of whom 24 were Black, 23 were Scalawags, and 21 were Carpetbaggers). Called the "Underwood Constitution" after the presiding officer, its main accomplishments were to reform the tax system and create a system of free public schools for the first time in Virginia. After heated debates over disfranchising Confederates, the Virginia legislature approved a Constitution that excluded ex-Confederates from holding office but allowed them to vote in state and federal elections.
Under pressure from national Republicans to be more moderate, General Schofield continued to administer the state through the Army. He appointed a personal friend, Henry H. Wells, as provisional governor. Wells was a Carpetbagger and a former Union general. Schofield and Wells fought and defeated Hunnicutt and the Scalawag Republicans. They took away contracts for state printing orders from Hunnicutt's newspaper. The national government ordered elections in 1869 that included a vote on the new Underwood constitution, a separate one on its two disfranchisement clauses that would have permanently stripped the vote from most former rebels, and a separate vote for state officials. The Army enrolled the Freedmen (ex-slaves) as voters but would not allow some 20,000 prominent whites to vote or hold office. The Republicans nominated Wells for governor, as Hunnicutt and most Scalawags went over to the opposition.
The leader of the moderate Republicans, calling themselves "True Republicans," was William Mahone (1826–1895), a railroad president and former Confederate general. He built a coalition of white Scalawag Republicans, some blacks, and ex-Democrats who formed the Conservative Party. Mahone argued that whites had to accept the results of the war, including civil rights and the vote for Freedmen. Mahone convinced the Conservative Party to drop its own candidate and endorse Gilbert C. Walker, Mahone's candidate for governor. In return, Mahone's people endorsed Conservatives in the legislative races. Mahone's plan worked, as the voters in 1869 elected Walker and defeated the proposed disfranchisement of ex-Confederates.
When the new legislature ratified the 14th and 15th amendments to the U.S. Constitution, Congress seated its delegation, and Virginia Reconstruction came to an end in January 1870. The Radical Republicans had been ousted in a non-violent election; Virginia was the only southern state that never elected a civilian government representing Radical Republican principles. Suffering from widespread destruction and difficulties in adapting to free labor, white Virginians generally came to share the postwar bitterness typical of southern attitudes. Historian Richard Lowe argues that the obstacles faced by the Radical Republican movement made their cause hopeless:
- even more damaging to Republicans' prospects than their poverty, their inexperience in state politics, their isolation from potential allies, and their identification with the hated North was the perverse and powerful racism that ran so powerfully through the white community. The great majority of the Old Dominion's white citizens could not take seriously a political party composed primarily of former slaves.
Railroad and industrial growth
In addition to those that were rebuilt, new railroads developed after the Civil War. In 1868, under railroad baron Collis P. Huntington, the Virginia Central Railroad was merged and transformed into the Chesapeake and Ohio Railroad. In 1870, several railroads were merged to form the Atlantic, Mississippi and Ohio Railroad, later renamed Norfolk & Western. In 1880, the towpath of the now-defunct James River & Kanawha canal was transformed into the Richmond and Allegheny Railroad, which within a decade would merge into the Chesapeake & Ohio. Others would include the Southern Railroad, the Seaboard Air Line, and the Atlantic Coast Line; still others would eventually reach into Virginia, including the Baltimore & Ohio and the Pennsylvania Railroad. The rebuilt Richmond, Fredericksburg, and Potomac Railroad eventually was linked to Washington, D.C.
In the 1880s, the Pocahontas Coalfield opened up in far southwest Virginia, with others to follow, in turn providing more demand for railroad transportation. In 1909, the Virginian Railway opened, built for the express purpose of hauling coal from the mountains of West Virginia to the ports at Hampton Roads. The growth of railroads resulted in the creation of new towns and rapid growth of others, including Clifton Forge, Roanoke, Crewe and Victoria. The railroad boom was not without incident: the Wreck of the Old 97 occurred just north of Danville, Virginia in 1903, later immortalized by a popular ballad.
With the invention of the cigarette rolling machine, and the great increase in smoking in the early 20th century, cigarettes and other tobacco products became a major industry in Richmond and Petersburg. Tobacco magnates such as Lewis Ginter funded a number of public institutions.
Readjustment, public education, segregation
A division among Virginia politicians occurred in the 1870s, when those who supported a reduction of Virginia's pre-war debt ("Readjusters") opposed those who felt Virginia should repay its entire debt plus interest ("Funders"). Virginia's pre-war debt was primarily for infrastructure improvements overseen by the Virginia Board of Public Works, much of which were destroyed during the war or in the new State of West Virginia.
After his unsuccessful bid for the Democratic nomination for governor in 1877, former Confederate general and railroad executive William Mahone became the leader of the "Readjusters", forming a coalition of conservative Democrats and white and black Republicans. The so-called Readjusters aspired "to break the power of wealth and established privilege" and to promote public education. The party promised to "readjust" the state debt in order to protect funding for newly established public education, and allocate a fair share to the new State of West Virginia. Its proposal to repeal the poll tax and increase funding for schools and other public facilities attracted biracial and cross-party support.
The Readjuster Party was successful in electing its candidate, William E. Cameron, as governor, and he served from 1882 to 1886. Mahone served in the U.S. Senate from 1881 to 1887, as did fellow Readjuster Harrison H. Riddleberger, who served from 1883 to 1889. The Readjusters' effective control of Virginia politics lasted until 1883, when they lost majority control in the state legislature, followed by the election of Democrat Fitzhugh Lee as governor in 1885. The Virginia legislature replaced both Mahone and Riddleberger in the U.S. Senate with Democrats.
In 1888 the exception to Readjuster and Democratic control was John Mercer Langston, who was elected to Congress from the Petersburg area on the Republican ticket. He was the first black elected to Congress from the state, and the last for nearly a century. He served one term. A talented and vigorous politician, he was an Oberlin College graduate. He had long been active in the abolitionist cause in Ohio before the Civil War, had been president of the National Equal Rights League from 1864 to 1868, and had created and headed the law department at Howard University, also serving as acting president of the university. When elected, he was president of what became Virginia State University.
While the Readjuster Party faded, the goal of public education remained strong, with institutions established for the education of schoolteachers. In 1884, the state acquired a bankrupt women's college at Farmville and opened it as a normal school. Growth of public education led to the need for additional teachers. In 1908, two additional normal schools were established, one at Fredericksburg and one at Harrisonburg, and in 1910, one at Radford.
After the Readjuster Party disappeared, Virginia Democrats rapidly passed legislation and constitutional amendments that effectively disfranchised African Americans and many poor whites, through the use of poll taxes and literacy tests. They created white, one-party rule under the Democratic Party for the next 80 years. White state legislators passed statutes that restored white supremacy through imposition of Jim Crow segregation. In 1902 Virginia passed a new constitution that reduced voter registration.
The Progressive Era after 1900 brought numerous reforms, designed to modernize the state, increase efficiency, apply scientific methods, promote education and eliminate waste and corruption.
A key leader was Governor Claude Swanson (1906–10), a Democrat who left machine politics behind to win office using the new primary law. Swanson's coalition of reformers in the legislature built schools and highways, raised teacher salaries and standards, promoted the state's public health programs, and increased funding for prisons. Swanson fought against child labor, lowered railroad rates and raised corporate taxes, while systematizing state services and introducing modern management techniques. The state funded a growing network of roads, with much of the work done by black convicts in chain gangs. After Swanson moved to the U.S. Senate in 1910 he promoted Progressivism at the national level as a supporter of President Woodrow Wilson, who had been born in Virginia and was considered a native son. Swanson, as a power on naval affairs, promoted the Norfolk Navy Yard and Newport News Ship Building and Drydock Corporation. Swanson's statewide organization evolved into the "Byrd Organization."
The State Corporation Commission (SCC) was formed as part of the 1902 Constitution, over the opposition of the railroads, to regulate railroad policies and rates. The SCC was independent of parties, courts, and big businesses, and was designed to maximize the public interest. It became an effective agency, which especially pleased local merchants by keeping rates low.
Virginia has a long history of agricultural reformers, and the Progressive Era stimulated their efforts. Rural areas suffered persistent problems, such as declining populations, widespread illiteracy, poor farming techniques, and debilitating diseases among both farm animals and farm families. Reformers emphasized the need to upgrade the quality of elementary education. With federal help, they set up a county agent system (today the Virginia Cooperative Extension) that taught farmers the latest scientific methods for dealing with tobacco and other crops, and taught farm housewives how to maximize their efficiency in the kitchen and nursery.
Some upper-class women, typified by Lila Meade Valentine of Richmond, promoted numerous Progressive reforms, including kindergartens, teacher education, visiting nurses programs, and vocational education for both races. Middle-class white women were especially active in the Prohibition movement. The woman suffrage movement became entangled in racial issues—whites were reluctant to allow black women the vote—and was unable to broaden its base beyond middle-class whites. Virginia women got the vote in 1920, the result of a national constitutional amendment.
In higher education, the key leader was Edwin A. Alderman, president of the University of Virginia, 1904–31. His goal was the transformation of the southern university into a force for state service, intellectual leadership, and educational utility. Alderman successfully professionalized and modernized the state's system of higher education. He promoted international standards of scholarship, and a statewide network of extension services. Joined by other college presidents, he promoted the Virginia Education Commission, created in 1910. Alderman's crusade encountered some resistance from traditionalists, and never challenged the Jim Crow system of segregated schooling.
While the progressives were modernizers, there was also a surge of interest in Virginia traditions and heritage, especially among the aristocratic First Families of Virginia (FFV). The Association for the Preservation of Virginia Antiquities (APVA), founded in Williamsburg in 1889, emphasized patriotism in the name of Virginia's 18th-century Founding Fathers. In 1907, the Jamestown Exposition was held near Norfolk to celebrate the tricentennial of the arrival of the first English colonists and the founding of Jamestown.
Attended by numerous federal dignitaries, and serving as the launch point for the Great White Fleet, the Jamestown Exposition also spurred interest in the military potential of the area. The site of the exposition would later become, in 1917, the location of the Norfolk Naval Station. The proximity to Washington, D.C., the moderate climate, and strategic location of a large harbor at the center of the Atlantic seaboard made Virginia a key location during World War I for new military installations. These included Fort Story, the Army Signal Corps station at Langley, Quantico Marine Base in Prince William County, Fort Belvoir in Fairfax County, Fort Lee near Petersburg and Fort Eustis, in Warwick County (now Newport News). At the same time, heavy shipping traffic made the area a target for U-boats, and a number of merchant vessels were attacked or sunk off the Virginia coast.
Temperance became an issue in the early 20th century. In 1916, a statewide referendum passed to outlaw the consumption of alcohol. This was overturned in 1933.
After 1930, tourism began to grow with the development of Colonial Williamsburg.
Shenandoah National Park was assembled from newly acquired land, as were the Blue Ridge Parkway and Skyline Drive. The Civilian Conservation Corps played a major role in developing that National Park, as well as Pocahontas State Park. By 1940 new highway bridges crossed the lower Potomac, Rappahannock, York, and James Rivers, bringing to an end the long-distance steamboat service which had long served as primary transportation throughout the Chesapeake Bay area. Ferryboats remain today in only a few places.
Blacks comprised a third of the population but lost nearly all their political power. The electorate was so small that from 1905 to 1948 government employees and officeholders cast a third of the votes in state elections. This small, controllable electorate facilitated the formation of a powerful statewide political machine by Harry Byrd (1887–1966), which dominated from the 1920s to the 1960s. Most of the blacks who remained politically active supported the Byrd organization, which in turn protected their right to vote, making Virginia's race relations the most harmonious in the South before the 1950s, according to V.O. Key. Not until Federal civil rights legislation was passed in 1964 and 1965 did African Americans recover the power to vote and the protection of other basic constitutional civil rights.
WWII and Modern era
The economic stimulus of World War II brought full employment for workers, high wages, and high profits for farmers. It brought in many thousands of soldiers and sailors for training. Virginia sent 300,000 men and 4,000 women to the services. The buildup for the war greatly increased the state's naval and industrial economic base, as did the growth of federal government jobs in Northern Virginia and adjacent Washington, DC. The Pentagon was built in Arlington as the largest office building in the world. Additional installations were added: in 1941, Fort A.P. Hill and Fort Pickett opened, and Fort Lee was reactivated. The Newport News shipyard expanded its labor force from 17,000 to 70,000 in 1943, while the Radford Arsenal had 22,000 workers making explosives. Turnover was very high—in one three-month period the Newport News shipyard hired 8,400 new workers as 8,300 others quit.
Cold War and Space Age
In addition to general postwar growth, the Cold War resulted in further growth in both Northern Virginia and Hampton Roads. With the Pentagon already established in Arlington, the newly formed Central Intelligence Agency located its headquarters further afield at Langley (unrelated to the Air Force Base). In the early 1960s, the new Dulles International Airport was built, straddling the Fairfax County-Loudoun County border. Other sites in Northern Virginia included the listening station at Vint Hill. Due to the presence of the U.S. Atlantic Fleet in Norfolk, in 1952 the Allied Command Atlantic of NATO was headquartered there, where it remained for the duration of the Cold War. Later in the 1950s and across the river, Newport News Shipbuilding would begin construction of the USS Enterprise—the world's first nuclear-powered aircraft carrier—and the subsequent atomic carrier fleet.
Virginia also witnessed American efforts in the Space Race. When the National Advisory Committee for Aeronautics was transformed into the National Aeronautics and Space Administration in 1958, the resulting Space Task Group was headquartered at the laboratories of Langley Research Center. From there, it would initiate Project Mercury, and would remain the headquarters of the U.S. manned spaceflight program until its transfer to Houston in 1962. On the Eastern Shore, near Chincoteague, Wallops Flight Facility served as a rocket launch site, including the launch of Little Joe 2 on December 4, 1959, which sent a rhesus monkey, Sam, into suborbital spaceflight. Langley later oversaw the Viking program to Mars.
The new U.S. Interstate highway system begun in the 1950s and the new Hampton Roads Bridge-Tunnel in 1958 helped transform Virginia Beach from a tiny resort town into one of the state's largest cities by 1963 and spurred the growth of the Hampton Roads region linked by the Hampton Roads Beltway. In the western portion of the state, completion of north-south Interstate 81 brought better access and new businesses to dozens of counties over a distance of 300 miles (480 km), as well as facilitating travel by students at the many Shenandoah area colleges and universities. The creation of Smith Mountain Lake, Lake Anna, Claytor Lake, Lake Gaston, and Buggs Island Lake, by damming rivers, attracted many retirees and vacationers to those rural areas. As the century drew to a close, Virginia tobacco growing gradually declined due to health concerns, although not as steeply as in Southern Maryland. A state community college system brought affordable higher education within commuting distance of most Virginians, including those in remote, underserved localities. Other new institutions were founded, most notably George Mason University and Liberty University. Localities such as Danville and Martinsville suffered greatly as their manufacturing industries closed.
Massive resistance and Civil Rights
The state government orchestrated systematic resistance to federal court orders requiring the end of segregation. The state legislature even enacted a package of laws, known as the Stanley plan, to try to evade racial integration in public schools. Prince Edward County even closed all its public schools in an attempt to avoid racial integration, but relented in the face of U.S. Supreme Court rulings. The first black students attended the University of Virginia School of Law in 1950, and Virginia Tech in 1953. In 2008, various actions of the Civil Rights Movement were commemorated by the Virginia Civil Rights Memorial in Richmond.
By the 1980s, Northern Virginia and the Hampton Roads region had achieved the greatest growth and prosperity, chiefly because of employment related to Federal government agencies and defense, as well as an increase in technology in Northern Virginia. Shipping through the Port of Hampton Roads began an expansion which continued into the early 21st century as new container facilities were opened. Coal piers in Newport News and Norfolk had recorded major gains in export shipments by August 2008. The recent expansion of government programs in the areas near Washington has profoundly affected the economy of Northern Virginia, whose population has experienced large growth and great ethnic/cultural diversification, exemplified by communities such as Tysons Corner, Reston and dense, urban Arlington. The subsequent growth of defense projects has also generated a local information technology industry. In recent years, heavy commuter traffic and the urgent need for both road and rail transportation improvements have been a major issue in Northern Virginia. The Hampton Roads region has also experienced much growth, as have the western suburbs of Richmond in both Henrico and Chesterfield Counties.
Virginia served as a major center for information technology during the early days of the Internet and network communication. Internet and other communications companies clustered in the Dulles Corridor. By 1993, the Washington area had the largest amount of Internet backbone and the highest concentration of Internet service providers. In 2000, more than half of all Internet traffic flowed along the Dulles Toll Road, and by 2016 70% of the world's internet traffic flowed through Loudoun County. Bill von Meister founded two Virginia companies that played major roles in the commercialization of the Internet: McLean, Virginia based The Source and Control Video Corporation, forerunner of America Online. While short-lived, The Source was one of the first online service providers alongside CompuServe. On hand for the launch of The Source, Isaac Asimov remarked "This is the beginning of the information age." The Source helped pave the way for future online service providers including another Virginia company founded by von Meister, America Online (AOL). AOL became the largest provider of Internet access during the Dial-up era of Internet access. AOL maintained a Virginia headquarters until the then-struggling company moved in 2007.
In 2006 former Governor of Virginia Mark Warner gave a speech and interview in the massively multiplayer online game Second Life, becoming the first politician to appear in a video game. In 2007 Virginia speedily passed the nation's first spaceflight act by a vote of 99–0 in the House of Delegates. Northern Virginia company Space Adventures is currently the only company in the world offering space tourism. In 2008 Virginia became the first state to pass legislation on Internet safety, with mandatory educational courses for 11- to 16-year-olds.
In 2013, by a slim margin in the governor's race, Virginia broke a long-standing streak of electing a governor from the party opposed to the incumbent president. For the first time in more than thirty years, the governor and the president would be from the same party.
Virginia history on stamps
Stamps of Virginia events and landmarks include
- Jamestown founding
- Mount Vernon
- Stratford Hall
- Colonial South and the Chesapeake
- Colony of Virginia
- Constitution of Virginia
- Former counties, cities, and towns of Virginia
- History of Richmond, Virginia, the current state capital
- History of the East Coast of the United States
- History of the Southern United States
- History of Virginia on stamps
- Newspapers in Virginia in the 18th century, List of
- Timeline of Virginia
- Virginia Conventions
- Charles H. Ambler and Festus P. Summers, West Virginia, the mountain state (1958) pp 48-52, 55
- "Archaeological evidence also indicates that Native Americans occupied the area as early as 6500 BC." "State Historical Highway Marker 'Pocahontas Island' To Be Dedicated in Petersburg", Petersburg, VA Official Website, Posted on: June 16, 2015, archived article accessed February 25, 2016
- Brown, Hutch (Summer 2000). "Wildland Burning by American Indians in Virginia". Fire Management Today. Washington, DC: U.S. Department of Agriculture, Forest Service. 60 (3): 32. An engraving after John White watercolor. Sparsely wooded field in background suggests the region's savanna.
- Virginia Indian Tribes, University of Richmond Archived March 9, 2005, at the Wayback Machine.
- c.f. Anishinaabe language: danakamigaa: "activity-grounds", i.e. "land of many events [for the People]"
- Berrier Jr., Ralph (September 20, 2009). "The slaughter at Saltville". The Roanoke Times. Retrieved October 9, 2011.[dead link]
- "Virginia Memory: Virginia Chronology". Library of Virginia. Retrieved October 9, 2011.
- James O. Glanville (2004). Conquistadors at Saltville in 1567?: A Review of the Archeological and Documentary Evidence. Smithfield Review.
- "A" New Andalucia and a Way to the Orient: The American Southeast During the Sixteenth Century. LSU Press. 1 October 2004. pp. 182–184. ISBN 978-0-8071-3028-5. Retrieved 30 March 2013.
- Stephen Adams (2001), The best and worst country in the world: perspectives on the early Virginia landscape, University of Virginia Press, p. 61, ISBN 978-0-8139-2038-2
- Charles M. Hudson; Carmen Chaves Tesser (1994). The Forgotten Centuries: Indians and Europeans in the American South, 1521-1704. University of Georgia Press. p. 359. ISBN 978-0-8203-1654-3.
- Jerald T. Milanich (February 10, 2006). Laboring in the Fields of the Lord: Spanish Missions And Southeastern Indians. University Press of Florida. p. 92. ISBN 978-0-8130-2966-5. Retrieved June 30, 2012.
- Seth Mallios (August 28, 2006). The Deadly Politics of Giving: Exchange And Violence at Ajacan, Roanoke, And Jamestown. University of Alabama Press. pp. 39–43. ISBN 978-0-8173-5336-0. Retrieved June 30, 2012.
- Price, 11
- Thomas C. Parramore; Peter C. Stewart; Tommy L. Bogger (April 1, 2000). Norfolk: The First Four Centuries. University of Virginia Press. p. 12. ISBN 978-0-8139-1988-1. Retrieved March 18, 2012.
- MR Peter C Mancall (2007). The Atlantic World and Virginia, 1550-1624. UNC Press Books. pp. 517, 522. ISBN 978-0-8078-3159-5. Retrieved 17 February 2013.
- Three names from the Roanoke Colony are still in use, all based on Native American names. Stewart, George (1945). Names on the Land: A Historical Account of Place-Naming in the United States. New York: Random House. p. 22. ISBN 1-59017-273-6.
- Raleigh, History of the World: "For when some of my people asked the name of that country, one of the savages answered 'Win-gan-da-coa', which is as much as to say, 'You wear good clothes.'"
- T. H. Breen, "Looking Out for Number One: Conflicting Cultural Values in Early Seventeenth-Century Virginia," South Atlantic Quarterly, Summer 1979, Vol. 78 Issue 3, pp. 342–360
- J. Frederick Fausz, "The 'Barbarous Massacre' Reconsidered: The Powhatan Uprising of 1622 and the Historians," Explorations in Ethnic Studies, vol 1 (Jan. 1978), 16–36
- Gleach p. 199
- John Esten Cooke, Virginia: A History of the People (1883) p. 205.
- Heinemann, Ronald L., et al., Old Dominion, New Commonwealth: a history of Virginia 1607-2007, U. Virginia Press 2007 ISBN 978-0-8139-2609-4, p.44-45
- Wilcomb E. Washburn, The Governor and the Rebel: A History of Bacon's Rebellion in Virginia (1957)
- Albert H. Tillson (1991). Gentry and Common Folk: Political Culture on a Virginia Frontier, 1740-1789. UP of Kentucky. p. 20ff.
- Alan Taylor, American Colonies: The Settling of North America (2002) p 157.
- John E. Selby, The Revolution in Virginia, 1775-1783 (1988) p 24-25.
- Quoted in Nancy L. Struna, "The Formalizing of Sport and the Formation of an Elite: The Chesapeake Gentry, 1650-1720s." Journal of Sport History 13#3 (1986) p 219. online
- Struna, The Formalizing of Sport and the Formation of an Elite pp 212-16.
- Timothy H. Breen, "Horses and gentlemen: The cultural significance of gambling among the gentry of Virginia." William and Mary Quarterly (1977) 34#2 pp: 239-257. online
- Edmund Morgan, American Slavery, American Freedom: The Ordeal of Colonial Virginia (1975) p 386
- Heinemann, Old Dominion, New Commonwealth (2007) 83–90
- Gene Wilhelm, Jr., "Folk Culture History of the Blue Ridge Mountains" Appalachian Journal (1975) 2#3 in JSTOR
- Delma R. Carpenter, "The Route Followed by Governor Spotswood in 1716 across the Blue Ridge Mountains." Virginia Magazine of History and Biography (1965): 405-412. in JSTOR
- Rob Sherwood, "Germanna's Treasure Trove of History: A Journey of Discovery." Inquiry 13.1 (2008): 45-55. online
- "The Route of the Three Notch'd Road : A Preliminary Report" (PDF). Virginiadot.org. Retrieved 2015-04-16.
- "The Route of the Three Notch'd Road : A Preliminary Report" (PDF). 3chopt.com. Retrieved 2015-04-16.
- Encyclopedia Virginia article: "Backcountry Frontier of Colonial Virginia" http://www.encyclopediavirginia.org/Backcountry_Frontier_of_Colonial_Virginia#start_entry
- http://www.virginiaplaces.org/settleland/fairfaxgrant.html "Once colonial settlement moved upstream of the Fall Line into the Piedmont, the dispute over the inland edge of the Northern Neck grant became an issue. Settlers seeking clear title had to know whether to file paperwork and pay fees to the colonial government in Williamsburg or the land office of the Fairfax family. If the colony could extinguish the Northern Neck grant somehow, revenues would flow to Williamsburg rather than to Leeds Castle."
- http://www.historichampshire.org/research/searching1.htm "in mid-March, 1735, Lord Fairfax arrived in Virginia on board the Glasgow on his first inspection trip to America. The trip lasted over two years during which time Fairfax reasserted his claim to the Proprietary and made arrangements for the survey of the boundaries."
- http://www.mountvernon.org/digital-encyclopedia/article/lord-fairfax/ "in 1748 hired, among others, the sixteen-year old Washington to survey the Northern Neck."
- George Washington's elder half brother Lawrence Washington (1718-1752) was married to Anne (1728-1761) a daughter of Col. William Fairfax of Belvoir—a land agent and cousin of Lord Thomas Fairfax. Anne's brother, George William Fairfax, was married to Sally Fairfax (nee Cary).
- Historical Statement Relative to the Town of Winchester: the Virginia House of Burgesses granted the fourth city charter in Virginia to "Winchester" as Frederick Town was renamed.
- MacCorkle, William Alexander. "The historical and other relations of Pittsburgh and the Virginias". Historic Pittsburgh General Text Collection. University of Pittsburgh. Retrieved 16 September 2013.
- Andrew Arnold Lambing; et al. "Allegheny County: its early history and subsequent development: from the earliest period till 1790". Historic Pittsburgh Text Collection. University of Pittsburgh. Retrieved 12 September 2013.
- "Addresses delivered at the celebration of the one hundred and fiftieth anniversary of the Battle of Bushy Run, August 5th and 6th, 1913". Historic Pittsburgh General Text Collection. University of Pittsburgh. Retrieved 16 September 2013.
- O'Meara, p. 48
- Anderson (2000), pp. 42–43
- Royal Proclamation I
- Gordon S. Wood, The American Revolution, A History. New York, Modern Library, 2002 ISBN 0-8129-7041-1, p.22
- Edward L. Bond and Joan R. Gundersen, The Episcopal Church in Virginia, 1607–2007 (2007)
- Rountree p. 161–162, 168–170, 175
- Edward L. Bond, "Anglican theology and devotion in James Blair's Virginia, 1685–1743," Virginia Magazine of History and Biography, (1996) 104#3 pp 313–40
- Charles Woodmason, The Carolina Backcountry on the Eve of the Revolution: The Journal and Other Writings of Charles Woodmason, Anglican Itinerant ed. by Richard J. Hooker (1969)
- David Brion Davis (1986). Slavery in the Colonial Chesapeake. Colonial Williamsburg. p. 28.
- Cynthia Lynn Lyerly (1998). Methodism and the Southern Mind, 1770-1810. Oxford UP. p. 119ff.
- John A. Ragosta, "Fighting for Freedom: Virginia Dissenters' Struggle for Religious Liberty during the American Revolution," Virginia Magazine of History and Biography, (2008) 116#3 pp. 226–261
- Rhys Isaac, "Evangelical Revolt: The Nature of the Baptists' Challenge to the Traditional Order in Virginia, 1765 To 1775," William and Mary Quarterly (1974) 31#3 pp 345–368 in JSTOR
- Pauline Maier, Ratification: The People Debate the Constitution, 1787–1788 (2010) pp. 235–319
- Peter Kolchin, American Slavery: 1619–1877, New York: Hill and Wang, 1994, p. 73
- Kolchin, American Slavery, p. 81
- Andrew Levy, The First Emancipator: The Forgotten Story of Robert Carter, the Founding Father who freed his slaves, New York: Random House, 2005 (ISBN 0-375-50865-1)
- Scott Nesbit, "Scales Intimate and Sprawling: Slavery, Emancipation, and the Geography of Marriage in Virginia", Southern Spaces, July 19, 2011. http://southernspaces.org/2011/scales-intimate-and-sprawling-slavery-emancipation-and-geography-marriage-virginia.
- Albert J. Raboteau, Slave Religion: The 'Invisible Institution' in the Antebellum South, New York: Oxford University Press, 2004, p. 137, accessed December 27, 2008
- "Soil exhaustion in the Tidewater became chronic, and the Piedmont was "worn out, washed and gullied." Conditions were better in the Valley of Virginia, where wheat rather than tobacco was dominant, but even there people saw a brighter future outside Virginia." http://www.vahistorical.org/what-you-can-see/story-virginia/explore-story-virginia/1776-1860/becoming-southerners
- "In all, perhaps one million Virginians left the commonwealth between the Revolution and the Civil War." http://www.vahistorical.org/what-you-can-see/story-virginia/explore-story-virginia/1776-1860/becoming-southerners
- "Virginia fell from first to seventh place in population, and its number of congressmen dropped from twenty-three to eleven." http://www.vahistorical.org/what-you-can-see/story-virginia/explore-story-virginia/1776-1860/becoming-southerners
- "Although this mass exodus of Virginians caused the state to slip into a secondary role both politically and economically, these westward-bound settlers spread their culture, laws, political ideas, and labor system across America." http://www.vahistorical.org/what-you-can-see/story-virginia/explore-story-virginia/1776-1860/becoming-southerners
- "Washington Iron Furnace National Register Nomination" (PDF). Virginia Department of Historic Resources. Retrieved March 23, 2011.
- S. Sydney Bradford, "The Negro Ironworker in Ante Bellum Virginia," Journal of Southern History, May 1959, Vol. 25 Issue 2, pp. 194–206; Ronald L. Lewis, "The Use and Extent of Slave Labor in the Virginia Iron Industry: The Antebellum Era," West Virginia History, Jan 1977, Vol. 38 Issue 2, pp. 141–156
- For a comparison of Virginia and New Jersey see John Bezis-Selfa, "A Tale of Two Ironworks: Slavery, Free Labor, Work, and Resistance in the Early Republic," William & Mary Quarterly, Oct 1999, Vol. 56 Issue 4, pp. 677–700
- see "Libby Prison", Encyclopedia Virginia, accessed 21 April 2012
- Aaron Sheehan-Dean, "Everyman's War: Confederate Enlistment in Civil War Virginia," Civil War History, March 2004, Vol. 50 Issue 1, pp. 5–26
- The U.S. Constitution requires permission of the old state for a new state to form. David R. Zimring, "'Secession in Favor of the Constitution': How West Virginia Justified Separate Statehood during the Civil War," West Virginia History, (2009) 3#2 pp. 23–51
- Richard O. Curry, A House Divided, Statehood Politics & the Copperhead Movement in West Virginia, (1964), pp. 141–147.
- Curry, A House Divided, p. 73.
- Curry, A House Divided, pp. 141–152.
- Charles H. Ambler and Festus P. Summers, West Virginia: The Mountain State ch 15–20
- Otis K. Rice, West Virginia: A History (1985) ch 12–14
- Kenneth C. Martis, The Historical Atlas of the Congresses of the Confederate States of America 1861-1865 (1994) p. 43-53.
- The main scholarly histories are Hamilton James Eckenrode, The Political History of Virginia during the Reconstruction (1904); Richard Lowe, Republicans and Reconstruction in Virginia, 1856–70 (1991); and Jack P. Maddex, Jr., The Virginia Conservatives, 1867–1879: A Study in Reconstruction Politics (1970). See also Heinemann et al., New Commonwealth (2007) ch. 11
- Mary Farmer-Kaiser, Freedwomen and the Freedmen's Bureau: Race, Gender, and Public Policy in the Age of Emancipation, (Fordham U.P., 2010), quotes pp. 51, 13
- Richard Lowe, "Another Look at Reconstruction in Virginia," Civil War History, March 1986, Vol. 32 Issue 1, pp. 56–76
- James L. McDonough, "John Schofield as Military Director of Reconstruction in Virginia.," Civil War History, Sept 1969, Vol. 15#3, pp. 237–256
- Heinemann, et al. Old Dominion, New Commonwealth: A History of Virginia, 1607–2007 (2007) p 248.
- Eric Foner, Politics and Ideology in the Age of the Civil War (1980) p 146
- James E. Bond, No Easy Walk to Freedom: Reconstruction and the Ratification of the Fourteenth Amendment (Praeger, 1997) p. 156.
- Eckenrode, The Political History of Virginia during the Reconstruction, ch 5
- The Carpetbaggers were Northern whites who had moved to Virginia after the war. Heinemann et al., New Commonwealth (2007) p. 248
- Note: In order to gain public education, black delegates had to accept segregation in the schools.
- Eckenrode, The Political History of Virginia during the Reconstruction, ch 6
- Eckenrode, The Political History of Virginia during the Reconstruction, ch 7
- Walker had 119,535 votes and Wells 101,204. The new Underwood Constitution was approved overwhelmingly, but the disfranchisement clauses were rejected by 3:2 ratios. The new legislature was controlled by the Conservative Party, which soon absorbed the "True Republicans". Eckenrode, The Political History of Virginia during the Reconstruction, p. 411
- Ku Klux Klan chapters were formed in Virginia in the early years after the war, but they played a negligible role in state politics and soon vanished. Heinemann et al., New Commonwealth (2007) p. 249
- Nelson M. Blake, William Mahone of Virginia: Soldier and Political Insurgent (1935)
- Richard Lowe, Republicans and Reconstruction in Virginia, 1856-70 (1991) p 119
- Henry C. Ferrell, Claude A. Swanson of Virginia: a political biography (1985)
- George Harrison Gilliam, "Making Virginia Progressive," Virginia Magazine of History and Biography, 1999, Vol. 107 Issue 2, pp. 189–222
- Lex Renda, "The Advent of Agricultural Progressivism in Virginia," Virginia Magazine of History and Biography, 1988, Vol. 96 Issue 1, pp. 55–82
- Lloyd C. Taylor, Jr. "Lila Meade Valentine: The FFV as Reformer," Virginia Magazine of History and Biography, 1962, Vol. 70 Issue 4, pp. 471–487
- Sara Hunter Graham, "Woman Suffrage In Virginia: The Equal Suffrage League and Pressure-Group Politics, 1909–1920," Virginia Magazine of History and Biography, 1993, Vol. 101 Issue 2, pp. 227–250
- Michael Dennis, "Reforming the 'academical village,'" Virginia Magazine of History and Biography, 1997, Vol. 105 Issue 1, pp. 53–86
- James M. Lindgren, "Virginia Needs Living Heroes": Historic Preservation in the Progressive Era," Public Historian, Jan 1991, Vol. 13 Issue 1, pp. 9–24
- "U-Boat Sinks Schooner Without Any Warning". New York Times. August 17, 1918. Retrieved July 28, 2011.
- "RAIDING U-BOAT SINKS 2 NEUTRALS OFF VIRGINIA COAST". New York Times. June 17, 1918. Retrieved July 28, 2011.
- Arlington Connection, Michael Lee Pope, October 14–20, 2009, Alcohol as Budget Savior, page 3
- Morgan Kousser, The Shaping of Southern Politics (1974) p 181; Wallenstein, Cradle of America (2007) p 283–4
- V.O. Key, Jr., Southern Politics (1949) p 32
- Joe Freitus, Virginia in the War Years, 1938-1945: Military Bases, the U-Boat War and Daily Life (McFarland, 2014)
- Charles Johnson, "V for Virginia: The Commonwealth Goes to War," Virginia Magazine of History and Biography 100 (1992): 365–398 in JSTOR
- "A Brief History of U.S. Fleet Forces Command". U.S. Fleet Forces Command, USN. Retrieved March 17, 2011.
- "Langley's Role in Project Mercury". NASA Langley Research Center. Retrieved March 20, 2011.
- "Giant Leaps Began With "Little Joe"". NASA Langley Research Center. Retrieved March 20, 2011.
- "Viking: Trialblazer For All Mars Research". NASA Langley Research Center. Retrieved March 20, 2011.
- Benjamin Muse, Virginia's Massive Resistance (1961)
- Wallenstein, Peter (Fall 1997). "Not Fast, But First: The Desegregation of Virginia Tech". VT Magazine. Virginia Tech. Retrieved 2008-04-12.
- Donnelly, Sally B. "D.C. Dotcom." Time August 8, 2000. http://www.time.com/time/magazine/article/0,9171,52073-2,00.html
- Freed, Benjamin (14 September 2016). "70 Percent of the World's Web Traffic Flows Through Loudoun County". Washingtonian.
- LIFE: Mark Warner becomes first U.S. politician to campaign in a video game
- Virginia leads the way
- Virginia First State to Require Internet Safety Lessons
- "Notable dates in Virginia history". Virginia Historical Society.
- Benjamin Vincent (1910), "Virginia", Haydn's Dictionary of Dates (25th ed.), London: Ward, Lock & Co. – via Hathi Trust
- Dabney, Virginius. Virginia: The New Dominion (1971)
- Heinemann, Ronald L., John G. Kolp, Anthony S. Parent Jr., and William G. Shade, Old Dominion, New Commonwealth: A History of Virginia, 1607–2007 (2007). ISBN 978-0-8139-2609-4.
- Kierner, Cynthia A., and Sandra Gioia Treadway. Virginia Women: Their Lives and Times, vol. 1. (University of Georgia Press, 2015) x, 378 pp
- Morse, J. (1797). "Virginia". The American Gazetteer. Boston, Massachusetts: At the presses of S. Hall, and Thomas & Andrews.
- Rubin, Louis D. Virginia: A Bicentennial History. States and the Nation Series. (1977), popular
- Salmon, Emily J., and Edward D.C. Campbell, Jr., eds. The Hornbook of Virginia history: A Ready-Reference Guide to the Old Dominion's People, Places, and Past 4th edition. (1994)
- Wallenstein, Peter. Cradle of America: Four Centuries of Virginia History (2007). ISBN 978-0-7006-1507-0.
- WPA. Virginia: A Guide to the Old Dominion (1940) famous guide to every locality; strong on society, economy and culture online edition
- Younger, Edward, and James Tice Moore, eds. The Governors of Virginia, 1860–1978 (1982)
- Tarter, Brent, "Making History in Virginia," Virginia Magazine of History and Biography Volume: 115. Issue: 1. 2007. pp. 3+. online edition
Prehistoric and Colonial
- Ambler, Charles H. Sectionalism in Virginia from 1776 to 1861 (1910) full text online
- Appelbaum, Robert, and John Wood Sweet, eds. Envisioning an English empire: Jamestown and the making of the North Atlantic world (U of Pennsylvania Press, 2011)
- Billings, Warren M., John E. Selby, and Thad W, Tate. Colonial Virginia: A History (1986)
- Bond, Edward L. Damned Souls in the Tobacco Colony: Religion in Seventeenth-Century Virginia (2000),
- Breen T. H. Puritans and Adventurers: Change and Persistence in Early America (1980). 4 chapters on colonial social history online edition
- Breen, T. H. Tobacco Culture: The Mentality of the Great Tidewater Planters on the Eve of Revolution (1985)
- Breen, T. H., and Stephen D. Innes. "Myne Owne Ground": Race and Freedom on Virginia's Eastern Shore, 1640–1676 (1980)
- Brown, Kathleen M. Good Wives, Nasty Wenches, and Anxious Patriarchs: Gender, Race, and Power in Colonial Virginia (1996) excerpt and text search
- Byrd, William. The Secret Diary of William Byrd of Westover, 1709–1712 (1941) ed by Louis B. Wright and Marion Tinling online edition; famous primary source; very candid about his private life
- Bruce, Philip Alexander. Institutional History of Virginia in the Seventeenth Century: An Inquiry into the Religious, Moral, Educational, Legal, Military, and Political Condition of the People, Based on Original and Contemporaneous Records (1910) online edition
- Coombs, John C., "The Phases of Conversion: A New Chronology for the Rise of Slavery in Early Virginia," William and Mary Quarterly, 68 (July 2011), 332–60.
- Davis, Richard Beale. Intellectual Life in the Colonial South, 1585–1763 (3 vol. 1978), detailed coverage of Virginia
- Freeman, Douglas Southall; George Washington: A Biography Volume: 1–7. (1948). Pulitzer Prize. vol 1 online
- Gleach, Frederic W. Powhatan's World and Colonial Virginia: A Conflict of Cultures (1997).
- Isaac, Rhys. Landon Carter's Uneasy Kingdom: Revolution and Rebellion on a Virginia Plantation (2004)
- Isaac, Rhys. The Transformation of Virginia, 1740–1790 (1982, 1999) Pulitzer Prize winner, dealing with religion and morality online review
- Kolp, John Gilman. Gentlemen and Freeholders: Electoral Politics in Colonial Virginia (Johns Hopkins U.P. 1998)
- Menard, Russell R. "The Tobacco Industry in the Chesapeake Colonies, 1617–1730: An Interpretation." Research In Economic History 1980 5: 109–177. ISSN 0363-3268; the standard scholarly study
- Mook, Maurice A. "The Aboriginal Population of Tidewater Virginia." American Anthropologist (1944) 46#2 pp: 193-208. online
- Morgan, Edmund S. Virginians at Home: Family Life in the Eighteenth Century (1952). online edition
- Morgan, Edmund S. "Slavery and Freedom: The American Paradox." Journal of American History 1972 59(1): 5–29 in JSTOR
- Morgan, Edmund S. American Slavery, American Freedom: The Ordeal of Colonial Virginia (1975) online edition highly influential study
- Nelson, John. A Blessed Company: Parishes, Parsons, and Parishioners in Anglican Virginia, 1690–1776 (2001)
- Price, David A. Love and Hate in Jamestown: John Smith, Pocahontas, and the Start of a New Nation (2005)
- Rasmussen, William M.S. and Robert S. Tilton. Old Virginia: The Pursuit of a Pastoral Ideal (2003)
- Roeber, A. G. Faithful Magistrates and Republican Lawyers: Creators of Virginia Legal Culture, 1680–1810 (1981)
- Rountree, Helen C. Pocahontas, Powhatan, Opechancanough: Three Indian Lives Changed by Jamestown (University of Virginia press, 2005), early Virginia history from an Indian perspective by a scholar
- Rutman, Darrett B., and Anita H. Rutman. A Place in Time: Middlesex County, Virginia, 1650–1750 (1984), new social history
- Sheehan, Bernard. Savagism and civility: Indians and Englishmen in colonial Virginia (Cambridge UP, 1980.)
- Wertenbaker, Thomas J. The Shaping of Colonial Virginia, comprising Patrician and Plebeian in Virginia (1910) full text online; Virginia under the Stuarts (1914) full text online; and The Planters of Colonial Virginia (1922) full text online; well written but outdated
- Wright, Louis B. The First Gentlemen of Virginia: Intellectual Qualities of the Early Colonial Ruling Class (1964)
1776 to 1850
- Adams, Sean Patrick. Old Dominion, Industrial Commonwealth: Coal, Politics, and Economy in Antebellum America (2004)
- Ambler, Charles H. Sectionalism in Virginia from 1776 to 1861 (1910) full text online
- Beeman, Richard R. The Old Dominion and the New Nation, 1788–1801 (1972)
- Dill, Alonzo Thomas. "Sectional Conflict in Colonial Virginia," Virginia Magazine of History and Biography 87 (1979): 300–315.
- Lebsock, Suzanne D. A Share of Honor: Virginia Women, 1600–1945 (1984)
- Link, William A. Roots of Secession: Slavery and Politics in Antebellum Virginia (2007) excerpt and text search
- Majewski, John D. A House Dividing: Economic Development in Pennsylvania and Virginia Before the Civil War (2006) excerpt and text search
- Risjord, Norman K. Chesapeake Politics, 1781–1800 (1978). in-depth coverage of Virginia, Maryland and North Carolina online edition
- Selby, John E. The Revolution in Virginia, 1775–1783 (1988)
- Shade, William G. Democratizing the Old Dominion: Virginia and the Second Party System 1824–1861 (1996)
- Taylor, Alan. The Internal Enemy: Slavery and War in Virginia, 1772-1832 (2014). 624 pp online review
- Tillson, Albert H., Jr. Gentry and Common Folk: Political Culture on a Virginia Frontier, 1740–1789 (1991)
- Varon, Elizabeth R. We Mean to Be Counted: White Women and Politics in Antebellum Virginia (1998)
- Virginia State Dept. of Education. The Road to Independence: Virginia 1763–1783 online edition; 80pp; with student projects
1850 to 1870
- Blair, William. Virginia's Private War: Feeding Body and Soul in the Confederacy, 1861–1865 (1998) online edition
- Crofts, Daniel W. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989)
- Eckenrode, Hamilton James. The political history of Virginia during the Reconstruction, (1904) online edition
- Kerr-Ritchie, Jeffrey R. Freedpeople in the Tobacco South: Virginia, 1860–1900 (1999)
- Lankford, Nelson. Richmond Burning: The Last Days of the Confederate Capital (2002)
- Lebsock, Suzanne D. "A Share of Honor": Virginia Women, 1600–1945 (1984)
- Lowe, Richard. Republicans and Reconstruction in Virginia, 1856–70 (1991)
- Maddex, Jr., Jack P. The Virginia Conservatives, 1867–1879: A Study in Reconstruction Politics (1970).
- Majewski, John. A House Dividing: Economic Development in Pennsylvania and Virginia before the Civil War (2000)
- Noe, Kenneth W. Southwest Virginia's Railroad: Modernization and the Sectional Crisis (1994)
- Robertson, James I. Civil War Virginia: Battleground for a Nation (1993) 197 pages; excerpt and text search
- Shanks, Henry T. The Secession Movement in Virginia, 1847–1861 (1934) online edition
- Sheehan-Dean, Aaron Charles. Why Confederates fought: family and nation in Civil War Virginia (2007) 291 pages excerpt and text search
- Simpson, Craig M. A Good Southerner: The Life of Henry A. Wise of Virginia (1985), wide-ranging political history
- Wallenstein, Peter, and Bertram Wyatt-Brown, eds. Virginia's Civil War (2008) excerpt and text search
- Wills, Brian Steel. The war hits home: the Civil War in southeastern Virginia (2001) 345 pages; excerpt and text search
- Brundage, W. Fitzhugh. Lynching in the New South: Georgia and Virginia, 1880–1930 (1993)
- Buni, Andrew. The Negro in Virginia Politics, 1902–1965 (1967)
- Crofts, Daniel W. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989)
- Ferrell, Henry C., Jr. Claude A. Swanson of Virginia: A Political Biography (1985) early 20th century
- Freitus, Joe. Virginia in the War Years, 1938-1945: Military Bases, the U-Boat War and Daily Life (McFarland, 2014) online review
- Gilliam, George H. "Making Virginia Progressive: Courts and Parties, Railroads and Regulators, 1890–1910." Virginia Magazine of History and Biography 107 (Spring 1999): 189–222.
- Heinemann, Ronald L. Depression and the New Deal in Virginia: The Enduring Dominion (1983)
- Heinemann, Ronald L. Harry Byrd of Virginia (1996)
- Heinemann, Ronald L. "Virginia in the Twentieth Century: Recent Interpretations." Virginia Magazine of History and Biography 94 (April 1986): 131–60.
- Hunter, Robert F. "Virginia and the New Deal," in John Braeman et al. eds. The New Deal: Volume Two – the State and Local Levels (1975) pp. 103–36
- Johnson, Charles. "V for Virginia: The Commonwealth Goes to War," Virginia Magazine of History and Biography 100 (1992): 365–398 in JSTOR
- Kerr-Ritchie, Jeffrey R. Freedpeople in the Tobacco South: Virginia, 1860–1900 (1999)
- Key, V. O., Jr. Southern Politics in State and Nation (1949), important chapter on Virginia in the 1940s
- Lassiter, Matthew D., and Andrew B. Lewis, eds. The Moderates' Dilemma: Massive Resistance to School Desegregation in Virginia (1998)
- Lebsock, Suzanne D. "A Share of Honor": Virginia Women, 1600–1945 (1984)
- Link, William A. A Hard Country and a Lonely Place: Schooling, Society, and Reform in Rural Virginia, 1870–1920 (1986)
- Martin-Perdue, Nancy J., and Charles L. Perdue Jr., eds. Talk about Trouble: A New Deal Portrait of Virginians in the Great Depression (1996)
- Moger, Allen W. Virginia: Bourbonism to Byrd, 1870–1925 (1968)
- Muse, Benjamin. Virginia's Massive Resistance (1961)
- Pulley, Raymond H. Old Virginia Restored: An Interpretation of the Progressive Impulse, 1870–1930 (1968)
- Shiftlett, Crandall. Patronage and Poverty in the Tobacco South: Louisa County, Virginia, 1860–1900 (1982), new social history
- Smith, J. Douglas. Managing White Supremacy: Race, Politics, and Citizenship in Jim Crow Virginia (2002)
- Sweeney, James R. "Rum, Romanism, and Virginia Democrats: The Party Leaders and the Campaign of 1928" Virginia Magazine of History and Biography 90 (October 1982): 403–31.
- Wilkinson, J. Harvie, III. Harry Byrd and the Changing Face of Virginia Politics, 1945–1966 (1968)
- Wynes, Charles E. Race Relations in Virginia, 1870–1902 (1961)
Environment, geography, locales
- Adams, Stephen. The Best and Worst Country in the World: Perspectives on the Early Virginia Landscape (2002) excerpt and text search
- Gottmann, Jean. Virginia at mid-century (1955), by a leading geographer
- Gottmann, Jean. Virginia in Our Century (1969)
- Kirby, Jack Temple. "Virginia's Environmental History: A Prospectus," Virginia Magazine of History and Biography, 1991, Vol. 99 Issue 4, pp. 449–488
- Parramore, Thomas C., with Peter C. Stewart and Tommy L. Bogger. Norfolk: The First Four Centuries (1994)
- Terwilliger, Karen. Virginia's Endangered Species (2001), esp. ch 1
- Sawyer, Roy T. America's Wetland: An Environmental and Cultural History of Tidewater Virginia and North Carolina (University of Virginia Press; 2010) 248 pages; traces the human impact on the ecosystem of the Tidewater region.
- Jefferson, Thomas. Notes on the State of Virginia
- Duke, Maurice, and Daniel P. Jordan, eds. A Richmond Reader, 1733–1983 (1983)
- Eisenberg, Ralph. Virginia Votes, 1924–1968 (1971), all statistics
- Encyclopedia Virginia
- Virginia Historical Society short history of state, with teacher guide
- Virginia Memory, digital collections and online classroom of the Library of Virginia
- How Counties Got Started in Virginia
- Union or Secession: Virginians Decide
- Virginia and the Civil War
- Civil War timeline
- Boston Public Library, Map Center. Maps of Virginia, various dates.
NASA's Cassini spacecraft, which made its closest approach to Jupiter early today, is providing ways to make invisible features visible, to track daily changes in some of the planet's most visible storms, and to hear the patterns in natural radio emissions near the edge of Jupiter's magnetic environment.
In collaboration with NASA's Galileo spacecraft, which has been orbiting Jupiter since 1995, Cassini is also beginning to provide new insight into how the solar wind of particles speeding away from the Sun affects a huge magnetic region surrounding Jupiter.
Scientists using instruments on both Cassini and Galileo gave a preview today at NASA's Jet Propulsion Laboratory, Pasadena, Calif., of what they are beginning to learn in the joint studies, which will continue for another three months.
Large storms on Jupiter, which can be larger than Earth and last for centuries, gain energy from swallowing smaller storms, preliminary analysis of Jupiter movies from the Cassini spacecraft suggests. The smaller storms pull their energy from lower depths, according to information collected by Galileo.
The Cassini spacecraft, which made its closest approach to Jupiter today at 2:12 a.m. Pacific Standard Time, has taken pictures of thunderstorms on Jupiter. As small storms pass each other, they can be ripped apart, or merged together. This shows that the small features in Jupiter's atmosphere harvest the energy from below the cloud surface, and the larger storms encompass the small ones, just as a big fish eats smaller ones for energy, said Dr. Andrew Ingersoll of the California Institute of Technology.
He said a better understanding of storms on Jupiter helps in understanding Earth's atmosphere, too. "The weather is different on Jupiter. You have a 300-year-old storm. We'd like to know why Jupiter's weather is so stable, and Earth's is so transient," he said.
Dr. Carolyn Porco of the University of Arizona presented planetwide movies of cloud movements on Jupiter, a sampling of the Cassini camera results that scientists will be examining in coming months.
"The camera has performed beyond our wildest imaginings - - and that's saying something, because we've been imagining this for a decade now," she said.
Both Cassini and Galileo have recently returned evidence of the variability in size of Jupiter's magnetosphere, a bubble of charged particles trapped within Jupiter's magnetic field. The bubble is so big that if it were visible to the eye, it would appear bigger to viewers on Earth than our own Moon does, despite its much greater distance. While Galileo was moving toward Jupiter this fall, it passed the magnetosphere boundary, but then the boundary moved inward toward Jupiter even faster than the spacecraft was moving, temporarily putting Galileo back outside the magnetosphere, said Dr. William Kurth of the University of Iowa.
Kurth played a sound recording based on natural radio emissions created by the energy of the area where the solar wind hits Jupiter's magnetosphere. The emissions were detected on an instrument onboard Cassini, which encountered the boundary this week much farther out from Jupiter than expected.
Another instrument on Cassini is creating images never before possible of the entire magnetosphere. Dr. Stamatios (Tom) Krimigis of Johns Hopkins University's Applied Physics Laboratory presented one at the briefing that Cassini took this week, showing some features of the structure within the magnetosphere. Other Cassini measurements show that some sulfur and oxygen spewed from volcanoes on Jupiter's moon Io are distributed much farther from the planet than the extent of the magnetosphere, Krimigis said.
The evidence shows there is a big nebula of material surrounding Jupiter, originating from the volcanoes on Io, he said.
Cassini passed about 9.7 million kilometers (6 million miles) from Jupiter today in order to use Jupiter's gravity for a boost to take Cassini to its main destination, Saturn. It will reach Saturn in 2004.
The images and sounds released at the briefing are available online from JPL at http://www.jpl.nasa.gov .
Cassini is a cooperative effort of NASA, the European Space Agency and the Italian Space Agency. JPL, a division of the California Institute of Technology in Pasadena, manages Galileo and Cassini for the NASA Office of Space Science, Washington, D.C.
Definition: Asthma is a common long term inflammatory disease of the airways of the lungs. It is characterized by variable and recurring symptoms, reversible airflow obstruction, and bronchospasm. Symptoms include episodes of wheezing, coughing, chest tightness, and shortness of breath. These episodes may occur a few times a day or a few times per week. Depending on the person they may become worse at night or with exercise.
Asthma is thought to be caused by a combination of genetic and environmental factors. Environmental factors include exposure to air pollution and allergens. Other potential triggers include medications such as aspirin and beta blockers. Diagnosis is usually based on the pattern of symptoms, response to therapy over time, and spirometry. Asthma is classified according to the frequency of symptoms, forced expiratory volume in one second (FEV1), and peak expiratory flow rate. It may also be classified as atopic or non-atopic where atopy refers to a predisposition toward developing a type 1 hypersensitivity reaction.
There is no cure for asthma. Symptoms can be prevented by avoiding triggers, such as allergens and irritants, and by the use of inhaled corticosteroids. Long-acting beta agonists (LABA) or antileukotriene agents may be used in addition to inhaled corticosteroids if asthma symptoms remain uncontrolled. Treatment of rapidly worsening symptoms is usually with an inhaled short-acting beta-2 agonist such as salbutamol and corticosteroids taken by mouth. In very severe cases, intravenous corticosteroids, magnesium sulfate, and hospitalization may be required.
Asthma symptoms vary from person to person. One may have infrequent asthma attacks, have symptoms only at certain times — such as when exercising — or have symptoms all the time.
Asthma signs and symptoms include:
*Shortness of breath
*Chest tightness or pain
*Trouble sleeping caused by shortness of breath, coughing or wheezing
*A whistling or wheezing sound when exhaling (wheezing is a common sign of asthma in children)
*Coughing or wheezing attacks that are worsened by a respiratory virus, such as a cold or the flu
Signs that your asthma is probably worsening include:
*Asthma signs and symptoms that are more frequent and bothersome
*Increasing difficulty breathing (measurable with a peak flow meter, a device used to check how well your lungs are working)
*The need to use a quick-relief inhaler more often
For some people, asthma signs and symptoms flare up in certain situations:
*Exercise-induced asthma, which may be worse when the air is cold and dry
*Occupational asthma, triggered by workplace irritants such as chemical fumes, gases or dust
*Allergy-induced asthma, triggered by particular allergens, such as pet dander, cockroaches or pollen.
Asthma is caused by a combination of complex and incompletely understood environmental and genetic interactions. These factors influence both its severity and its responsiveness to treatment. It is believed that the recent increased rates of asthma are due to changing epigenetics (heritable factors other than those related to the DNA sequence) and a changing living environment.
Exposure to various irritants and substances that trigger allergies (allergens) can trigger signs and symptoms of asthma. Asthma triggers are different from person to person and can include:
*Airborne allergens, such as pollen, animal dander, mold, cockroaches and dust mites
*Respiratory infections, such as the common cold
*Physical activity (exercise-induced asthma)
*Air pollutants and irritants, such as smoke
*Certain medications, including beta blockers, aspirin, ibuprofen (Advil, Motrin IB, others) and naproxen (Aleve)
*Strong emotions and stress
*Sulfites and preservatives added to some types of foods and beverages, including shrimp, dried fruit, processed potatoes, beer and wine
*Gastroesophageal reflux disease (GERD), a condition in which stomach acids back up into your throat.
While asthma is a well recognized condition, there is not one universal agreed upon definition. It is defined by the Global Initiative for Asthma as “a chronic inflammatory disorder of the airways in which many cells and cellular elements play a role. The chronic inflammation is associated with airway hyper-responsiveness that leads to recurrent episodes of wheezing, breathlessness, chest tightness and coughing particularly at night or in the early morning. These episodes are usually associated with widespread but variable airflow obstruction within the lung that is often reversible either spontaneously or with treatment”.
There is currently no precise test with the diagnosis typically based on the pattern of symptoms and response to therapy over time. A diagnosis of asthma should be suspected if there is a history of: recurrent wheezing, coughing or difficulty breathing and these symptoms occur or worsen due to exercise, viral infections, allergens or air pollution.
To rule out other possible conditions — such as a respiratory infection or chronic obstructive pulmonary disease (COPD) — your doctor will do a physical exam and ask you questions about your signs and symptoms and about any other health problems.
Tests to measure lung function
You may also be given lung (pulmonary) function tests to determine how much air moves in and out as you breathe. These tests may include:
*Spirometry. This test estimates the narrowing of your bronchial tubes by checking how much air you can exhale after a deep breath and how fast you can breathe out.
*Peak flow. A peak flow meter is a simple device that measures how hard you can breathe out. Lower than usual peak flow readings are a sign your lungs may not be working as well and that your asthma may be getting worse. Your doctor will give you instructions on how to track and deal with low peak flow readings.
Lung function tests often are done before and after taking a medication called a bronchodilator (brong-koh-DIE-lay-tur), such as albuterol, to open your airways. If your lung function improves with use of a bronchodilator, it’s likely you have asthma.
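To make the numbers from these lung function tests concrete, here is a minimal, hypothetical Python sketch (illustrative only, not part of any diagnostic guideline) that computes the percent improvement in FEV1 after a bronchodilator and classifies a peak flow reading against a personal best. The thresholds assumed here (roughly a 12% FEV1 improvement suggesting reversible obstruction, and 80% and 50% of personal best as the usual green/yellow/red action-plan cut-offs) are commonly cited rules of thumb and may differ from what your doctor uses; the function names are invented for this example.

def percent_change(before, after):
    # Percent improvement from a pre- to post-bronchodilator FEV1 measurement.
    return (after - before) / before * 100.0

def peak_flow_zone(reading, personal_best):
    # Classify a peak flow reading using assumed action-plan zones:
    # green >= 80%, yellow 50-79%, red < 50% of personal best.
    pct = reading / personal_best * 100.0
    if pct >= 80:
        return "green (good control)"
    if pct >= 50:
        return "yellow (caution - follow your action plan)"
    return "red (medical alert - seek help)"

# Example: FEV1 rises from 2.10 L to 2.45 L after an inhaled bronchodilator.
print(round(percent_change(2.10, 2.45), 1))  # 16.7 - above ~12%, consistent with reversibility

# Example: a morning peak flow of 320 L/min against a personal best of 500 L/min.
print(peak_flow_zone(320, 500))  # yellow zone (64% of personal best)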
Additional tests:
Other tests to diagnose asthma include:
*Methacholine challenge. Methacholine is a known asthma trigger that, when inhaled, will cause mild constriction of your airways. If you react to the methacholine, you likely have asthma. This test may be used even if your initial lung function test is normal.
*Nitric oxide test. This test, though not widely available, measures the amount of the gas, nitric oxide, that you have in your breath. When your airways are inflamed — a sign of asthma — you may have higher than normal nitric oxide levels.
*Imaging tests. A chest X-ray and high-resolution computerized tomography (CT) scan of your lungs and nose cavities (sinuses) can identify any structural abnormalities or diseases (such as infection) that can cause or aggravate breathing problems.
*Allergy testing. This can be performed by a skin test or blood test. Allergy tests can identify allergy to pets, dust, mold and pollen. If important allergy triggers are identified, this can lead to a recommendation for allergen immunotherapy.
*Sputum eosinophils. This test looks for certain white blood cells (eosinophils) in the mixture of saliva and mucus (sputum) you discharge during coughing. Eosinophils are present when symptoms develop and become visible when stained with a rose-colored dye (eosin).
*Provocative testing for exercise and cold-induced asthma. In these tests, your doctor measures your airway obstruction before and after you perform vigorous physical activity or take several breaths of cold air.
A number of factors are thought to increase your chances of developing asthma. These include:
*Having a blood relative (such as a parent or sibling) with asthma
*Having another allergic condition, such as atopic dermatitis or allergic rhinitis (hay fever)
*Being a smoker
*Exposure to secondhand smoke
*Exposure to exhaust fumes or other types of pollution
*Exposure to occupational triggers, such as chemicals used in farming, hairdressing and manufacturing
Asthma complications include:
*Signs and symptoms that interfere with sleep, work or recreational activities
*Sick days from work or school during asthma flare-ups
*Permanent narrowing of the bronchial tubes (airway remodeling) that affects how well you can breathe
*Emergency room visits and hospitalizations for severe asthma attacks
*Side effects from long-term use of some medications used to stabilize severe asthma
Proper treatment makes a big difference in preventing both short-term and long-term complications caused by asthma.
While there is no cure for asthma, symptoms can typically be improved. A specific, customized plan for proactively monitoring and managing symptoms should be created. This plan should include the reduction of exposure to allergens, testing to assess the severity of symptoms, and the usage of medications. The treatment plan should be written down and advise adjustments to treatment according to changes in symptoms.
The most effective treatment for asthma is identifying triggers, such as cigarette smoke, pets, or aspirin, and eliminating exposure to them. If trigger avoidance is insufficient, the use of medication is recommended. Pharmaceutical drugs are selected based on, among other things, the severity of illness and the frequency of symptoms. Specific medications for asthma are broadly classified into fast-acting and long-acting categories.
Bronchodilators are recommended for short-term relief of symptoms. In those with occasional attacks, no other medication is needed. If mild persistent disease is present (more than two attacks a week), low-dose inhaled corticosteroids or alternatively, an oral leukotriene antagonist or a mast cell stabilizer is recommended. For those who have daily attacks, a higher dose of inhaled corticosteroids is used. In a moderate or severe exacerbation, oral corticosteroids are added to these treatments.
Avoidance of triggers is a key component of improving control and preventing attacks. The most common triggers include allergens, smoke (tobacco and other), air pollution, non-selective beta-blockers, and sulfite-containing foods. Cigarette smoking and second-hand smoke (passive smoke) may reduce the effectiveness of medications such as corticosteroids. Laws that limit smoking decrease the number of people hospitalized for asthma. Dust mite control measures, including air filtration, chemicals to kill mites, vacuuming, mattress covers and other methods, had no effect on asthma symptoms. Overall, exercise is beneficial in people with stable asthma.
Medications used to treat asthma are divided into two general classes: quick-relief medications used to treat acute symptoms; and long-term control medications used to prevent further exacerbation.
*Short-acting beta2-adrenoceptor agonists (SABA), such as salbutamol (albuterol USAN) are the first line treatment for asthma symptoms. They are recommended before exercise in those with exercise induced symptoms.
*Anticholinergic medications, such as ipratropium bromide, provide additional benefit when used in combination with SABA in those with moderate or severe symptoms. Anticholinergic bronchodilators can also be used if a person cannot tolerate a SABA. If a child requires admission to hospital, additional ipratropium does not appear to help over a SABA.
*Older, less selective adrenergic agonists, such as inhaled epinephrine, have similar efficacy to SABAs. They are however not recommended due to concerns regarding excessive cardiac stimulation.
A fluticasone propionate metered-dose inhaler is commonly used for long-term control.
*Corticosteroids are generally considered the most effective treatment available for long-term control. Inhaled forms such as beclomethasone are usually used except in the case of severe persistent disease, in which oral corticosteroids may be needed. It is usually recommended that inhaled formulations be used once or twice daily, depending on the severity of symptoms.
*Long-acting beta-adrenoceptor agonists (LABA) such as salmeterol and formoterol can improve asthma control, at least in adults, when given in combination with inhaled corticosteroids. In children this benefit is uncertain. When used without steroids they increase the risk of severe side-effects and even with corticosteroids they may slightly increase the risk.
*Leukotriene receptor antagonists (such as montelukast and zafirlukast) may be used in addition to inhaled corticosteroids, typically also in conjunction with a LABA. Evidence is insufficient to support use in acute exacerbations. In children they appear to be of little benefit when added to inhaled steroids, and the same applies in adolescents and adults. They are useful by themselves. In those under five years of age, they were the preferred add-on therapy after inhaled corticosteroids by the British Thoracic Society in 2009. A similar class of drugs, 5-LOX inhibitors, may be used as an alternative in the chronic treatment of mild to moderate asthma among older children and adults. As of 2013 there is one medication in this family known as zileuton.
*Mast cell stabilizers (such as cromolyn sodium) are another non-preferred alternative to corticosteroids.
Many people with asthma, like those with other chronic disorders, use alternative treatments; surveys show that roughly 50% use some form of unconventional therapy. There is little data to support the effectiveness of most of these therapies. Evidence is insufficient to support the usage of Vitamin C. There is tentative support for its use in exercise-induced bronchospasm.
Acupuncture is not recommended for the treatment as there is insufficient evidence to support its use. Air ionisers show no evidence that they improve asthma symptoms or benefit lung function; this applied equally to positive and negative ion generators.
Manual therapies, including osteopathic, chiropractic, physiotherapeutic and respiratory therapeutic maneuvers, have insufficient evidence to support their use in treating asthma. The Buteyko breathing technique for controlling hyperventilation may result in a reduction in medication use; however, the technique does not have any effect on lung function. Thus an expert panel felt that evidence was insufficient to support its use.
However, regular yoga with pranayama (the breathing exercise), practiced under the guidance of an expert, shows a lot of improvement among many asthma patients.
Some home remedies:
*Express the juice from garlic and mix 10 to 15 drops in warm water; take internally for asthma relief. Alternatively, mix ¼ cup onion juice, 1 tablespoon honey and 1/8 tablespoon black pepper. Or mix licorice and ginger together and take ½ tablespoon in 1 cup of water for relief from asthma.
*Drink a glass of 2/3 carrot juice and 1/3 spinach juice, 3 times a day.
*Add 30-40 basil leaves to a liter of water, strain out the leaves and drink the water throughout the day; this is said to be effective for asthma.
The evidence for the effectiveness of measures to prevent the development of asthma is weak. Some show promise, including limiting smoke exposure both in utero and after delivery, breastfeeding, and increased exposure to daycare or large families, but none are well supported enough to be recommended for this indication. Early pet exposure may be useful. Results from exposure to pets at other times are inconclusive and it is only recommended that pets be removed from the home if a person has allergic symptoms to said pet. Dietary restrictions during pregnancy or when breast feeding have not been found to be effective and thus are not recommended. Reducing or eliminating workplace exposure to compounds to which people are sensitive may be effective. It is not clear if annual influenza vaccinations affect the risk of exacerbations. Immunization, however, is recommended by the World Health Organization. Smoking bans are effective in decreasing exacerbations of asthma.
The prognosis for asthma is generally good, especially for children with mild disease. Mortality has decreased over the last few decades due to better recognition and improvement in care. Globally it causes moderate or severe disability in 19.4 million people as of 2004 (16 million of which are in low and middle income countries). Of asthma diagnosed during childhood, half of cases will no longer carry the diagnosis after a decade. Airway remodeling is observed, but it is unknown whether these represent harmful or beneficial changes. Early treatment with corticosteroids seems to prevent or ameliorate a decline in lung function.
Disclaimer: This information is not meant to be a substitute for professional medical advice or help. It is always best to consult with a physician about serious health concerns. This information is in no way intended to diagnose or prescribe remedies. This is purely for educational purposes.
History of Virginia
The History of Virginia begins with documentation by the first Spanish explorers to reach the area in the 1500s, when it was occupied chiefly by Algonquian, Iroquoian, and Siouan peoples. After a failed English attempt to colonize Virginia in the 1580s by Walter Raleigh, permanent English colonization began in Virginia with Jamestown, Virginia, in 1607. The Virginia Company colony was looking for gold but failed and the colonists could barely feed themselves. The famine during the harsh winter of 1609 forced the colonists to eat leather from their clothes and boots and resort to cannibalism. The colony nearly failed until tobacco emerged as a profitable export. It was grown on plantations, using primarily indentured servants for the intensive hand labor involved. After 1662, the colony turned black slavery into a hereditary racial caste. By 1750, the primary cultivators of the cash crop were West African slaves. While the plantations thrived because of the high demand for tobacco, most white settlers raised their families on subsistence farms. Warfare with the Virginia Indian nations had been a factor in the 17th century; after 1700 there was continued conflict with natives east of the Alleghenies, especially in the French and Indian War (1754-1763), when the tribes were allied with the French. The westernmost counties including Wise and Washington only became safe with the death of Bob Benge in 1794.
The Virginia Colony became the wealthiest and most populated British colony in North America, with an elected General Assembly. The colony was dominated by rich planters who were also in control of the established Anglican Church. Baptist and Methodist preachers brought the Great Awakening, welcoming black members and leading to many evangelical and racially integrated churches. Virginia planters had a major role in gaining independence and in the development of democratic-republican ideals of the United States. They were important in the Declaration of Independence, writing the Constitutional Convention (and preserving protection for the slave trade), and establishing the Bill of Rights. The state of Kentucky separated from Virginia in 1792. Four of the first five presidents were Virginians: George Washington, the "Father of his country"; and after 1800, "The Virginia Dynasty" of presidents for 24 years: Thomas Jefferson, James Madison, and James Monroe.
During the first half of the 19th century, tobacco prices declined and tobacco lands lost much of their fertility. Planters adopted mixed farming, with an emphasis on wheat and livestock, which required less labor. The Constitutions of 1830 and 1850 expanded suffrage but did not equalize white male apportionment statewide. The population grew slowly from 700,000 in 1790, to 1 million in 1830, to 1.2 million in 1860. Virginia was the largest state joining the Confederate States of America in 1861. It became the major theater of war in the American Civil War. Unionists in western Virginia created the separate state of West Virginia. Virginia's economy was devastated in the war and disrupted in Reconstruction, when it was administered as Military District Number One. The first signs of recovery were seen in tobacco cultivation and the related cigarette industry, followed by coal mining and increasing industrialization. In 1883, conservative white Democrats regained power in the state government, ending Reconstruction and implementing Jim Crow laws. The 1902 Constitution limited the number of white voters below 19th-century levels and effectively disfranchised blacks until federal civil rights legislation of the mid-1960s.
From the 1920s to the 1960s, the state was dominated by the Byrd Organization, with dominance by rural counties aligned in a Democratic party machine, but their hold was broken over their failed Massive Resistance to school integration. After World War II, the state's economy thrived, with a new industrial and urban base. A statewide community college system was developed. The first U.S. African-American governor since Reconstruction was Virginia's Douglas Wilder in 1990. Since the late 20th century, the contemporary economy has become more diversified in high-tech industries and defense-related businesses. Virginia's changing demography makes for closely divided voting in national elections but it is still generally conservative in state politics.
- 1 Prehistory
- 2 Early European exploration
- 3 Royal colony
- 4 Religion
- 5 American Revolution
- 6 Early Republic and antebellum periods
- 7 Civil War
- 8 Reconstruction
- 9 Gilded Age
- 10 Progressive Era
- 11 Interwar
- 12 WWII and Modern era
- 13 Contemporary commonwealth
- 14 Virginia history on stamps
- 15 See also
- 16 References
- 17 External links
For thousands of years before the arrival of the English, various societies of indigenous peoples inhabited the portion of the New World later designated by the English as "Virginia". Archaeological and historical research by anthropologist Helen C. Rountree and others has established 3,000 years of settlement in much of the Tidewater. Even so, a historical marker dedicated in 2015 states that recent archaeological work at Pocahontas Island has revealed prehistoric habitation dating to about 6500 BCE.
As of the 16th century, what is now the state of Virginia was occupied by three main culture groups—the Iroquoian, the Eastern Siouan & the Algonquian. The tip of the Delmarva Peninsula south of the Indian River was controlled by the Algonquian Nanticoke. Meanwhile, the Tidewater region along the Chesapeake Bay coastline appears to have been controlled by the Algonquian Piscataway (who lived around the Potomac River), the Powhatan & Chowanoke, or Roanoke (who lived between the James River & Neuse River). Inland of them were two Iroquoian tribes known as the Nottoway, or Managog, & the Meherrin. The rest of Virginia was almost entirely Eastern Sioux, divided between the Monaghan & the Manahoac, who held lands from central West Virginia, through southern Virginia and up to the Maryland border (the region of the Shenandoah River Valley was controlled by a different people). Also, the lands of peoples connected to the Mississippian Culture may have just barely crossed over into the state's southwestern corner. Later, these tribes merged to form the Yuchi.
Rountree has noted that "empire" more accurately describes the political structure of the Powhatan. In the late 16th and early 17th centuries, a chief named Wahunsunacock created this powerful empire by conquering or affiliating with approximately thirty tribes whose territories covered much of what is now eastern Virginia. Known as the Powhatan, or paramount chief, he called this area Tenakomakah ("densely inhabited Land"). The empire was advantageous to some tribes, who were periodically threatened by other groups, such as the Monacan. The first English colony, Jamestown, was allegedly allowed to be settled by Chief Powhatan as he wanted new military & economic advantages over the Siouans west of his people. The following chief, Opechancanough, succeeded him within only a couple of years after contact & had a much different view of the English. He led several failed uprisings, which caused his people to fracture, some tribes going south to live among the Chowanoke or north to live among the Piscataway. After that, one of his sons took several Powhatans and moved off to the northwest, becoming the Shawnee & took over former Susquehannock territories. Recorded in the states of Maryland & Pennsylvania throughout the 17th century, they eventually made their way into the Ohio River Valley, where they are believed to have merged with a variety of other native peoples to form the powerful confederacy that controlled the area that is now West Virginia until the Shawnee Wars (1811-1813). By only 1646, very few Powhatans remained and were policed harshly by the English, no longer even allowed to choose their own leaders. They were organized into the Pamunkey & Mattaponi tribes. They eventually dissolved altogether and merged into Colonial society.
The Piscataway were pushed north on the Potomac River early in their history, coming to be cut off from the rest of their people. While some stayed, others chose to migrate west. Their movements are generally unrecorded in the historical record, but they reappear at Fort Detroit in modern-day Michigan by the end of the 18th century. These Piscataways are said to have moved to Canada & probably merged with the Mississaugas, who had broken away from the Anishinaabeg & migrated southeast into that same region. Despite that, many Piscataway stayed in Virginia & Maryland until the modern day. Other members of the Piscataway also merged with the Nanticoke.
The Chowanoke were moved to reservation lands by the English in 1677, where they remained until the 19th century. By 1821, they had merged with other tribes and were generally dissolved; however, the descendants of these peoples reformed in the 21st century and re-acquired much of their old reservation in 2014.
- Eastern Siouan
Many of the Siouan peoples of the state seem to have originally been a collection of smaller tribes with uncertain affiliation. Names recorded throughout the 17th century were the Monahassanough, Rassawek, Mowhemencho, Monassukapanough, Massinacack, Akenatsi, Mahoc, Nuntaneuck, Nutaly, Nahyssan, Sapon, Monakin, Toteros, Keyauwees, Shakori, Eno, Sissipahaw, Monetons & Mohetons living & migrating throughout what is now West Virginia, Virginia, North Carolina & South Carolina. All were said to have spoken at least two distinct languages—Saponi (which appears to be a missing link language existing between the Chiwere & Dhegihan variants) & Catawba (which is most closely related to Biloxi & the Gulf Coast Siouan languages). John Smith was the first to note two groups in the Virginian interior—the Monaghans & the Monahoacs. The words came from the Powhatan & translations are uncertain; however, Monaghan seems similar to a known Lenape word, Monaquen, which means "to scalp." They were also commonly referred to as the Eastern Blackfoot, which explains why some Saponi today identify as the Siouan-Blackfoot people, and later still as the Christannas.
As far as can be assumed, however, it seems that they were arranged thus—from east to west along the north shore of the James River, just inland of the Powhatan, would have been the Eno, Shakori & Saponi. Around the source of the river (& probably holding some of the river's islands further back east) should have been the Occaneechi, or Akenatsi. They were believed to have been the "grandfather" tribe of the region, a term among native peoples for any tribe highly respected & venerated for being the first or oldest people of their kind. West of the Occaneechi & primarily located in what is now West Virginia were at least two more tribes believed to have been related—the Moneton of the Kanawha River & the Tutelo of the Bluestone River. About midway along the southern shores of the James River should have been the Sissipahaw. They were probably the only Eastern Siouan tribe in the state who would have spoken a form of the Catawba language rather than Saponi/Tutelo. North of them were the Manahoac, or Mahock. The Keyauwee are also of note; it is difficult to say whether they were a subtribe of others mentioned, a newly formed tribe, or from somewhere else.
Originally existing along the entirety of the current western border of Virginia & up through some of the southwestern mountains of West Virginia & Kentucky, they seem to have first been driven east by the Iroquoian Westo during the Beaver Wars. Historians have since come to note that the Westo were almost certainly the Erie & Neutrals/Chonnonton, who had conquered wide swathes of what is now northern & eastern Ohio during the 1630s & were subsequently conquered and driven out by the Iroquois Confederacy around 1650. The Tutelo of West Virginia are first noted as living north of the Saponi, in northern Virginia, around 1670. Later in the Beaver Wars, the Iroquois lost their new lands in Ohio & Michigan to the French & their new native allies around the western Great Lakes. Sometime during the 1680s-90s, the Iroquois began pushing south and declared war on the Saponi-related tribes, pushing them down into North Carolina. It is noted in 1701 that the Saponi, Tutelo, Occaneechi, Shakori & Keyauwee were then planning to form a confederacy to take back their homeland. The writer of that account assumes that all five tribes were driven south, but the Tutelos are noted as allies from the "western mountains." This is the same year that the Iroquois surrendered to the French, but it appears that hostilities with the Saponi continued long term. The Iroquois were soon after convinced by the English to start selling off all their extended lands, which were nearly impossible for them to hold. All they kept was a string of territory along the Susquehanna River in Pennsylvania.
The Saponi attempted to return to their lands, but were unable to do so. Around 1702, the Governor of the Virginia Colony gave them reservation land & opened Fort Christanna nearby. All the tribes appear to have returned except the Keyauwee, who remained among the Catawba. They came to be known as the Christanna People at this time. The fort offered economic & educational aid to the locals, but after it closed in 1718, the Saponi dispersed. With continued conflicts between the Saponi & Iroquois in the region, the governors of Virginia, Pennsylvania & New York all stepped in together to organize a peace treaty, which did ultimately end the conflict. Sometime around 1722, the Tutelo & some other Saponis migrated to the Iroquoian-held Pennsylvania territory and settled there, among many other refugees of local tribes who had been destroyed, absorbed into Colonial society, or simply moved on without them. In 1753, the Iroquois reorganized them all into the Tutelo, Delaware & Nanticoke Tribes, relocated them to New York & gave them full honors among the Confederacy, despite none of them being Iroquoian. After the American Revolution, these tribes accompanied them to Canada. Later, the descendants of the Tutelos migrated again to Ohio, becoming the Saponi & Tutelo Tribes of Ohio. Many of the other Siouan peoples of Virginia were also noted to have merged with the Catawba & Yamasee tribes.
While mainly noted in Virginia, it appears that the Tuscarora migrated into the region from the Delmarva Peninsula early in the 17th century. John Smith noted them on an early map as the Kuskarawocks. (They may have also absorbed the Tockwoghs, who also appear on the map & were most likely Iroquoian.) After an extended war with the English, the Tuscarora began leaving for New York and merging with the Iroquois in groups around 1720, continuing approximately until the Iroquois were banished to Canada following the American Revolution. Those who remained became a new tribe—the Coharie—and migrated south to live near the Meherrin.
The Meherrin aided the Tuscarora in that war, but did not follow them north. In 1717, the English gave them a reservation just south of the North Carolina border. The North Carolina government contested their land rights and tried to take them away due to a surveyor's error that caused both Native & English settlers to claim parts of the reservation. However, they managed to, more or less, stay put well into the modern day. The Nottoway also managed to largely stay in the vicinity of Virginia until the modern day without much conflict or loss of heritage.
Although the Beaver Wars were primarily centered in Ohio, the Iroquois Confederacy of New York was also in a long-running conflict with the Susquehannocks of central Pennsylvania, as was the English colony of Maryland, although the two were not known to be allies themselves. Sometime around the 1650s or 1660s, Maryland made peace with & allied itself to the Susquehannocks, and the Iroquois therefore labelled Maryland an enemy as well, despite themselves being allied with England by this time. After ending their war with the Susquehannocks in 1674, however, the Iroquois went on a more or less inexplicable rampage against Maryland and its remaining Native allies, which included the Piscataways and the Eastern Siouan tribes. The Eastern Siouans were forced out of the state during the 1680s. After the Beaver Wars officially ended in 1701, the Iroquois sold off their extended holdings—including their land in Virginia—to the English.
In the mid 17th century, around 1655–56, an Iroquoian group known as the Westo invaded Virginia. While many theories abound as to their origins, they appear to have been the last of the Eries & Chonnontons who had invaded Ohio at the start of the Beaver Wars. The Westo seem to have pushed into southern West Virginia, then moved straight south against the smaller Siouan tribes of the Carolinas. In the 1680s, they were destroyed by a coalition of native warriors led by a tribe called the Sawanno. Some have speculated that these were the Shawnee; however, the Shawnee should not have been anywhere near the region at the time, & the word is a legitimate word in the Yuchi language, meaning this tribe may have been a different people. There is also a note from the Cherokee that a group of "Shawnee" were living among them in the 1660s (following the Westo invasion, but prior to their defeat), then migrated into southern West Virginia.
- Other Tribes of Note
The first Spanish & English explorers appear to have greatly overestimated the size of the Cherokee, placing them as far north as Virginia. However, many historians now believe that there was a large, mixed-race, mixed-language confederacy in the region, called the Coosa. The Spanish also gave them the nicknames Chalaques & Uchis during the 16th century, & the English turned Chalaques into Cherokees. The Cherokees we know today were among these people, but lived much further south, & both the Cherokee language (of Iroquoian origin) & the Yuchi language (Muskogean) have been heavily modified by Siouan influence & carry many Siouan loanwords. This nation would have existed throughout parts of the states of Virginia, Kentucky, Tennessee, North & South Carolina & Georgia, with cores of different culture groups organized at different extremes of the territory &, probably, speaking Yuchi as a common tongue.
After the Westo punched straight through them, they seem to have split along the line of the Tennessee River to create the Cherokee to the south & the Yuchi to the north. Then, following the Yamasee War (1715–1717), the Yuchi were forced across Appalachia & split again, into the Coyaha & the Chisca. The French, seeing an opportunity for new allies, ingratiated themselves with the Chisca and had them relocated to the heart of the Illinois Colony to live among the Algonquian Ilinoweg. Later, as French influence along the Ohio River waned, the tribe seems to have split away again, taking many Ilinoweg tribes with them, and moved back to Kentucky, where they became the Kispoko. The Kispoko later became the fourth tribe of the Shawnee.
Meanwhile, the Coyaha reforged their alliance with the Cherokee & brought in many of the smaller Muskogean tribes of Alabama (often referred to as the Mobilians) to form the Creek Confederacy. While this tribe would go on to have great historical influence through the remainder of the Colonial Era & the early history of the United States, they never returned to Virginia.
Furthermore, as with the Sawannos, it seems many splinter groups fractured off from the core group and moved into places like West Virginia & Kentucky. Afterwards, those lands seemed to be filled with native peoples who claimed "Cherokee" ancestry, yet had no organized tribal affiliation. The descendants of those people live throughout West Virginia, Pennsylvania, Kentucky & Ohio today. It also seems probable that these populations married into the surviving Monongahela & other Siouan groups, though the populations on both sides must have been quite small, since these peoples never re-formed a government & remained nomadic for a great deal of time afterwards.
After their discovery of the New World in the 15th century, European states began trying to establish colonies there. England, the Dutch Republic, France, Portugal, and Spain were the most active.
In 1540, a party led by two Spaniards, Juan de Villalobos and Francisco de Silvera, sent by Hernando de Soto, entered what is now Lee County in search of gold. In the spring of 1567, Hernando Moyano de Morales, a sergeant of Spanish explorer Juan Pardo, led a group of soldiers northward from Fort San Juan in Joara, a native town in what is now western North Carolina, to attack and destroy the Chisca village of Maniatique near present-day Saltville. The attack near Saltville was the first recorded battle in Virginia history.
Another Spanish party, captained by Antonio Velázquez in the caravel Santa Catalina, explored the lower Chesapeake Bay region of Virginia in mid-1561 under the orders of Ángel de Villafañe. During this voyage, two Kiskiack or Paspahegh youths, including Don Luis, were taken back to Spain. In 1566, an expedition sent from Spanish Florida by Pedro Menéndez de Avilés reached the Delmarva Peninsula. The expedition, consisting of two Dominican friars, thirty soldiers and Don Luis, was a failed effort to set up a Spanish colony in the Chesapeake, in the belief that it was an opening to the fabled Northwest Passage.
In 1570, Spanish Jesuits established the Ajacán Mission on the lower peninsula. However, in 1571 it was destroyed by Don Luis and a party of his indigenous allies. In August 1572, Pedro Menéndez de Avilés arrived from St. Augustine with thirty soldiers and sailors to take revenge for the massacre of the Jesuits, and hanged approximately 20 natives. In 1573, the governor of Spanish Florida, Pedro Menéndez de Márquez, conducted further exploration of the Chesapeake. In the 1580s, Captain Vicente González led several voyages into the Chesapeake in search of English settlements in the area. In 1609, Spanish Florida governor Pedro de Ibarra sent Francisco Fernández de Écija from St. Augustine to survey the activities of the Jamestown colonists, but Spain never again attempted a colony after the failure of the Ajacán Mission.
The Roanoke Colony was the first English colony in the New World. It was founded at Roanoke Island in what was then Virginia, now part of Dare County, North Carolina. Between 1584 and 1587, there were two major groups of settlers sponsored by Sir Walter Raleigh who attempted to establish a permanent settlement at Roanoke Island, and each failed. The final group disappeared completely after supplies from England were delayed three years by a war with Spain. Because they disappeared, they were called "The Lost Colony."
The name Virginia came from information gathered by the Raleigh-sponsored English explorations along what is now the North Carolina coast. Philip Amadas and Arthur Barlowe reported that a regional "king" named Wingina ruled a land of Wingandacoa. Queen Elizabeth modified the name to "Virginia", perhaps in part noting her status as the "Virgin Queen." Although the word is Latinate, it stands as the oldest English-language place-name in the United States.
On the second voyage, Raleigh discovered that, while the chief of the Secotans was indeed called Wingina, the expression wingandacoa, heard by the English upon arrival, actually meant "You wear good clothes" in Carolina Algonquian, and was not the native name of the country, as previously misunderstood.
After the death of Queen Elizabeth I, in 1603 King James I assumed the throne of England. After years of war, England was strapped for funds, so he granted responsibility for England's New World colonization to the Virginia Company, which became incorporated as a joint stock company by a proprietary charter drawn up in 1606. There were two competing branches of the Virginia Company, and each hoped to establish a colony in Virginia in order to exploit gold (which the region did not actually have), to establish a base of support for English privateering against Spanish ships, and to spread Protestantism to the New World in competition with Spain's spread of Catholicism. Within the Virginia Company, the Plymouth Company branch was assigned the northern portion of the area known as Virginia, and the London Company the area to the south.
In December 1606, the London Company dispatched a group of 104 colonists in three ships: the Susan Constant, Godspeed, and Discovery, under the command of Captain Christopher Newport. After a long, rough voyage of 144 days, the colonists finally arrived in Virginia on April 26, 1607 at the entrance to the Chesapeake Bay. At Cape Henry, they went ashore, erected a cross, and did a small amount of exploring, an event which came to be called the "First Landing."
Under orders from London to seek a more inland location safe from Spanish raids, they explored the Hampton Roads area and sailed up the newly christened James River to the Fall Line at what would later become the cities of Richmond and Manchester.
After weeks of exploration, the colonists selected a location and founded Jamestown on May 14, 1607. It was named in honor of King James I (as was the river). However, while the location at Jamestown Island was favorable for defense against foreign ships, the low and marshy terrain was harsh and inhospitable for a settlement. It lacked drinking water, access to game for hunting, and much space for farming. While it seemed favorable that it was not inhabited by the Native Americans, within a short time the colonists were attacked by members of the local Paspahegh tribe.
The colonists arrived ill-prepared to become self-sufficient. They had planned on trading with the Native Americans for food, were dependent upon periodic supplies from England, and had planned to spend some of their time seeking gold. Leaving the Discovery behind for their use, Captain Newport returned to England with the Susan Constant and the Godspeed, and came back twice during 1608 with the First Supply and Second Supply missions. Trading and relations with the Native Americans were tenuous at best, and many of the colonists died from disease, starvation, and conflicts with the natives. After several failed leaders, Captain John Smith took charge of the settlement, and many credit him with sustaining the colony during its first years, as he had some success in trading for food and leading the discouraged colonists.
After Smith's return to England in August 1609, there was a long delay in the scheduled arrival of supplies. During the winter of 1609/10 and continuing into the spring and early summer, no more ships arrived. The colonists faced what became known as the "starving time". When the new governor, Sir Thomas Gates, finally arrived at Jamestown on May 23, 1610, along with other survivors of the wreck of the Sea Venture that resulted in Bermuda being added to the territory of Virginia, he discovered that over 80% of the 500 colonists had died; many of the survivors were sick.
Back in England, the Virginia Company was reorganized under its Second Charter, ratified on May 23, 1609, which gave most leadership authority over the colony to the governor, the newly appointed Thomas West, 3rd Baron De La Warr. In June 1610, he arrived with 150 men and ample supplies. De La Warr began the First Anglo-Powhatan War against the natives. Under his leadership, Samuel Argall kidnapped Pocahontas, daughter of the Powhatan chief, and held her at Henricus.
The economy of the Colony was another problem. Gold had never been found, and efforts to introduce profitable industries in the colony had all failed until John Rolfe introduced his two foreign types of tobacco: Orinoco and Sweet Scented. These produced a better crop than the local variety and with the first shipment to England in 1612, the customers enjoyed the flavor, thus making tobacco a cash crop that established Virginia's economic viability.
The First Anglo-Powhatan War ended when Rolfe married Pocahontas in 1614.
George Yeardley took over as Governor of Virginia in 1619. He ended one-man rule and created a representative system of government with the General Assembly, the first elected legislative assembly in the New World.
Also in 1619, the Virginia Company sent 90 single women as potential wives for the male colonists to help populate the settlement. That same year the colony acquired a group of "twenty and odd" Angolans, brought by two English privateers. They were probably the first Africans in the colony. They, along with many European indentured servants, helped to expand the growing tobacco industry, which was already the colony's primary product. Although these Africans were treated as indentured servants, this marked the beginning of America's history of slavery. Major importation of enslaved Africans by European slave traders did not take place until much later in the century.
In some areas, individual rather than communal land ownership or leaseholds were established, providing families with motivation to increase production, improve standards of living, and gain wealth. Perhaps nowhere was this more progressive than at Sir Thomas Dale's ill-fated Henricus, a westerly-lying development located along the south bank of the James River, where natives were also to be provided an education at the Colony's first college.
About 6 miles (9.7 km) south of the falls at present-day Richmond, in Henrico Cittie, the Falling Creek Ironworks was established near the confluence of Falling Creek, using local ore deposits to make iron. It was the first in North America.
Virginians were intensely individualistic at this point, weakening the small new communities. According to Breen (1979), their horizon was limited by the present or near future. They believed that the environment could and should be forced to yield quick financial returns, so everyone looked out for his own interests at the expense of cooperative ventures. Farms were scattered and few villages or towns were formed. This extreme individualism led to the failure of the settlers to provide for their own defense against the Indians, resulting in two massacres.
English settlers soon came into conflict with the natives. Despite some successful interaction, issues of ownership and control of land and other resources, and trust between the peoples, became areas of conflict. Virginia experiences drought conditions on average every three years, and the colonists did not understand that the natives were ill-prepared to feed them during hard times. In the years after 1612, the colonists cleared land to farm export tobacco, their crucial cash crop. As tobacco exhausted the soil, the settlers continually needed to clear more land for replacement. This reduced the wooded land which Native Americans depended on for hunting to supplement their food crops. As more colonists arrived, they wanted more land.
The tribes tried to fight the encroachment by the colonists. Major conflicts took place in the Indian massacre of 1622 and the Second Anglo-Powhatan war, both under the leadership of the late Chief Powhatan's younger brother, Chief Opechancanough. By the mid-17th century, the Powhatan and allied tribes were in serious decline in population, due in large part to epidemics of newly introduced infectious diseases, such as smallpox and measles, to which they had no natural immunity. The European colonists had expanded territory so that they controlled virtually all the land east of the fall line on the James River. Fifty years earlier, this territory had been the empire of the mighty Powhatan Confederacy.
Surviving members of many tribes assimilated into the general population of the colony. Some retained small communities with more traditional identity and heritage. In the 21st century, the Pamunkey and Mattaponi are the only two tribes to maintain reservations originally assigned under the English. As of 2010, the state has recognized eleven Virginia Indian tribes. Others have renewed interest in seeking state and Federal recognition since the celebration of the 400th anniversary of Jamestown in 2007. State celebrations gave Native American tribes prominent formal roles to showcase their contributions to the state.
While the developments of 1619 and continued growth in the several following years were seen as favorable by the English, many aspects, especially the continued need for more land to grow tobacco, were the source of increasing concern to the Native Americans most affected, the Powhatan.
By this time, the remaining Powhatan Empire was led by Chief Opechancanough, chief of the Pamunkey, and brother of Chief Powhatan. He had earned a reputation as a fierce warrior under his brother's chiefdom. Soon, he gave up on hopes of diplomacy, and resolved to eradicate the English colonists.
On March 22, 1622, the Powhatan killed about 400 colonists in the Indian Massacre of 1622. With coordinated attacks, they struck almost all the English settlements along the James River, on both shores, from Newport News Point on the east at Hampton Roads all the way west upriver to Falling Creek, a few miles above Henricus and John Rolfe's plantation, Varina Farms.
At Jamestown, a warning by an Indian boy named Chanco to his employer, Richard Pace, helped reduce total deaths. Pace secured his plantation, and rowed across the river during the night to alert Jamestown, which allowed colonists some defensive preparation. They had no time to warn outposts, which suffered deaths and captives at almost every location. Several entire communities were essentially wiped out, including Henricus and Wolstenholme Towne at Martin's Hundred. At the Falling Creek Ironworks, which had been seen as promising for the Colony, two women and three children were among the 27 killed, leaving only two colonists alive. The facilities were destroyed.
Despite the losses, two thirds of the colonists survived; after withdrawing to Jamestown, many returned to the outlying plantations, although some were abandoned. The English carried out reprisals against the Powhatan and there were skirmishes and attacks for about a year before the colonists and Powhatan struck a truce.
The colonists invited the chiefs and warriors to Jamestown, where they proposed a toast of liquor. Dr. John Potts and some of the Jamestown leadership had poisoned the natives' share of the liquor, which killed about 200 men. Colonists killed another 50 Indians by hand.
The period between the coup of 1622 and another Powhatan attack on English colonists along the James River (see Jamestown) in 1644 marked a turning point in the relations between the Powhatan and the English. In the early period, each side believed it was operating from a position of power; by the Treaty of 1646, the colonists had taken the balance of power, and had established control between the York and Blackwater Rivers.
In 1624, the Virginia Company's charter was revoked and the colony transferred to royal authority as a crown colony, but the elected representatives in Jamestown continued to exercise a fair amount of power. Under royal authority, the colony began to expand to the North and West with additional settlements.
In 1634, a new system of local government was created in the Virginia Colony by order of the King of England. Eight shires were designated, each with its own local officers; these shires were renamed as counties only a few years later.
The first significant attempts at exploring the Trans-Allegheny region occurred under the administration of Governor William Berkeley. Efforts to explore farther into Virginia were hampered in 1644 when about 500 colonists were killed in another Indian massacre led, once again, by Opechancanough. Berkeley is credited with efforts to develop other sources of income for the colony besides tobacco, such as cultivation of mulberry trees for silkworms and other crops at his large Green Spring Plantation.
The colonists defined the 1644 coup as an "uprising". Chief Opechancanough expected the outcome would reflect what he considered the morally correct position: that the colonists were violating their pledges to the Powhatan. During the 1644 event, Chief Opechancanough was captured. While imprisoned, he was murdered by one of his guards. After the death of Opechancanough, and following the repeated colonial attacks in 1644 and 1645, the remaining Powhatan tribes had little alternative but to accede to the demands of the settlers.
Most Virginia colonists were loyal to the crown (Charles I) during the English Civil War, but in 1652, Oliver Cromwell sent a force to remove and replace Gov. Berkeley with Governor Richard Bennett, who was loyal to the Commonwealth of England. This governor was a moderate Puritan who allowed the local legislature to exercise most controlling authority, and spent much of his time directing affairs in neighboring Maryland Colony. Bennett was followed by two more "Cromwellian" governors, Edward Digges and Samuel Matthews, although in fact none of these three men were technically appointees; they were selected by the House of Burgesses, which was really in control of the colony during these years.
Many royalists fled to Virginia after their defeat in the English Civil War. Some intermarried with existing plantation families to establish influential families in Virginia such as the Washingtons, Randolphs, Carters and Lees. However, most 17th-century immigrants were indentured servants, merchants or artisans. After the Restoration, in recognition of Virginia's loyalty to the crown, King Charles II of England bestowed Virginia with the nickname "The Old Dominion", which it still bears today.
Governor Berkeley, who remained popular after his first administration, returned to the governorship at the end of Commonwealth rule. However, Berkeley's second administration was characterized by many problems. Disease, hurricanes, Indian hostilities, and economic difficulties all plagued Virginia at this time. Berkeley established autocratic authority over the colony. To protect this power, he refused to hold new legislative elections for 14 years, preserving a House of Burgesses that supported him. He only agreed to new elections when rebellion became a serious threat.
Berkeley finally did face a rebellion in 1676. Indians had begun attacking encroaching settlers as they expanded to the north and west. Serious fighting broke out when settlers responded to violence with a counter-attack against the wrong tribe, which further extended the violence. Berkeley did not assist the settlers in their fight. Many settlers and historians believe Berkeley's refusal to fight the Indians stemmed from his investments in the fur trade. Large scale fighting would have cut off the Indian suppliers Berkeley's investment relied on. Nathaniel Bacon organized his own militia of settlers who retaliated against the Indians. Bacon became very popular as the primary opponent of Berkeley, not only on the issue of Indians, but on other issues as well. Berkeley condemned Bacon as a rebel, but pardoned him after Bacon won a seat in the House of Burgesses and accepted it peacefully. After a lack of reform, Bacon rebelled outright, captured Jamestown, and took control of the colony for several months. The incident became known as Bacon's Rebellion. Berkeley returned himself to power with the help of the English militia. Bacon burned Jamestown before abandoning it and continued his rebellion, but died of disease. Berkeley severely crushed the remaining rebels.
In response to Berkeley's harsh repression of the rebels, the English government removed him from office. After the burning of Jamestown, the capital was temporarily moved to Middle Plantation, located on the high ground of the Virginia Peninsula equidistant from the James and York Rivers.
Local leaders had long desired a school of higher education, for the sons of planters and for educating the Indians. An earlier attempt to establish a permanent university at Henricus failed after the Indian Massacre of 1622 wiped out the entire settlement. Finally, seven decades later, with encouragement from the Colony's House of Burgesses and other prominent individuals, Reverend Dr. James Blair, the colony's top religious leader, prepared a plan. Blair went to England and in 1693 obtained a charter from the Protestant monarchs King William III and Queen Mary II, who had deposed the Catholic James II of England in 1688 during the Glorious Revolution. The college was named the College of William and Mary in honor of the two monarchs.
The rebuilt statehouse in Jamestown burned again in 1698. After that fire, at the suggestion of students at the college, the colonial capital was again moved to nearby Middle Plantation, this time permanently, and the town was renamed Williamsburg in honor of the king. Plans were made to construct a capitol building and lay out the new city according to the survey of Theodorick Bland.
As the English increasingly used tobacco products, tobacco in the American colonies became a significant economic force, especially in the tidewater region surrounding the Chesapeake Bay. Vast plantations were built along the rivers of Virginia, and social and economic systems developed to grow and distribute this cash crop. Some elements of this system included the importation and use of enslaved laborers to grow crops. Planters would then fill large hogsheads with tobacco and convey them to inspection warehouses. The Virginia House of Burgesses standardized and improved the quality of exported tobacco with the Tobacco Inspection Act of 1730, which required inspectors to grade tobacco at 40 specified locations.
In terms of the white population, the top five percent or so were planters who possessed growing wealth and increasing political power and social prestige. They controlled the local Anglican church, choosing ministers, handling church property, and disbursing local charity. They sought elected and appointed offices. About 60 percent of white Virginians were part of a broad middle class that owned substantial farms; by the second generation, death rates from malaria and other local diseases had declined so much that a stable family structure was possible. The bottom third owned no land, and verged on poverty. Many were recent arrivals, or recently released from indentured servitude. Social stratification was most severe in the Northern Neck, where the Fairfax family had been given a proprietorship. In some districts there, 70 percent of the land was owned by a handful of families, and three-fourths of the whites had no land at all. In the frontier districts, large numbers of Irish and German Protestants had settled, often moving down from Pennsylvania. Tobacco was not important there; farmers focused on hemp, grain, cattle, and horses. Entrepreneurs had begun to mine and smelt the local iron ores.
Sports occupied a great deal of attention at every social level, starting at the top. In England hunting was sharply restricted to landowners, and enforced by armed gamekeepers. In America, game was more than plentiful. Everyone—including servants and slaves—could and did hunt. Poor men with a good rifle aim won praise; rich gentlemen who were off target won ridicule. In 1691, Sir Francis Nicholson, the governor, organized competitions for the "better sort of Virginians onely who are Batchelors," and he offered prizes "to be shot for, wrastled, played at backswords, & Run for by Horse and foott." Horse racing was the main event. The typical farmer did not own a horse in the first place, and racing was a matter for gentlemen only, but ordinary farmers were spectators and gamblers. Selected slaves often became skilled horse trainers. Horse racing was especially important for knitting the gentry together. The race was a major public event designed to demonstrate to the world the superior social status of the gentry through expensive breeding, training, boasting and gambling, and especially winning the races themselves. Historian Timothy Breen explains that horse racing and high-stakes gambling were essential to maintaining the status of the gentry. When they publicly bet a large sum on their favorite horse, it told the world that competitiveness, individualism, and materialism were the core elements of gentry values.
Historian Edmund Morgan (1975) argues that Virginians in the 1650s—and for the next two centuries—turned to slavery and a racial divide as an alternative to class conflict. "Racism made it possible for white Virginians to develop a devotion to the equality that English republicans had declared to be the soul of liberty." That is, white men became politically much more equal than was possible without a population of low-status slaves.
By 1700, the population reached 70,000 and continued to grow rapidly from a high birth rate, low death rate, importation of slaves from the Caribbean, and immigration from Britain and Germany, as well as from Pennsylvania. The climate was mild, and farmland was cheap and fertile.
In 1716, Governor Alexander Spotswood led the Knights of the Golden Horseshoe Expedition, reaching the top ridge of the Blue Ridge Mountains at Swift Run Gap (elevation 2,365 feet (721 m)). Spotswood promoted Germanna, a settlement of German immigrants brought over for the purpose of iron production, in modern-day Orange County.
By the 1730s, the Three Notch'd Road extended from the vicinity of the fall line of the James River at the future site of Richmond westerly to the Shenandoah Valley, crossing the Blue Ridge Mountains at Jarmans Gap. Around this time, Governor William Gooch promoted settlement of the Virginia backcountry as a means to insulate the Virginia colony from Native American and New France settlements in the Ohio Country. In response, a wide variety of settlers traveled southward on the Indian Trail later known as the Great Wagon Road along the Shenandoah Valley from Pennsylvania. Many, including German Palatines and Scotch-Irish American immigrants, settled along former Indian camps. According to Encyclopedia Virginia, "By 1735 there were as many as 160 families in the backcountry region, and within ten years nearly 10,000 Europeans lived in the Shenandoah Valley."
As colonial settlement moved into the piedmont area from the Tidewater/Chesapeake area, there was some uncertainty as to the exact tax boundaries of Virginia land versus the land patent quit-rent rights held by Thomas Fairfax, 6th Lord Fairfax of Cameron in the Northern Neck Proprietary. When Robert "King" Carter died in 1732, Lord Fairfax read about his vast wealth in The Gentleman's Magazine and decided to settle the matter himself by coming to Virginia. Lord Fairfax travelled to Virginia for the first time between 1735 and 1737 to inspect and protect his lands. He employed a young George Washington (Washington's first employment) to survey his lands lying west of the Blue Ridge. Once this legal battle was ironed out, Frederick County, Virginia was founded in 1743, and the "Frederick Town" settlement there received Virginia's fourth city charter in February 1752; it is now known as Winchester, Virginia.
In the late 1740s and the second half of the 18th century, the British angled for control of the Ohio Country. Virginians Thomas Lee and brothers Lawrence and Augustine Washington organized the Ohio Company to represent the prospecting and trading interests of Virginian investors. In 1749, the British Crown, via the colonial government of Virginia, granted the Ohio Company a great deal of this territory on the condition that it be settled by British colonists. Governor Robert Dinwiddie of Virginia was an investor in the Ohio Company, which stood to lose money if the French held their claim. To counter the French military presence in Ohio, in October 1753 Dinwiddie ordered the 21-year-old Major George Washington (whose brother was another Ohio Company investor) of the Virginia Regiment to warn the French to leave Virginia territory. Ultimately, many Virginians were caught up in the resulting French and Indian War of 1754–1763. At the completion of the war, the Royal Proclamation of 1763 forbade all British settlement past a line drawn along the Appalachian Mountains, with the land west of the Proclamation Line known as the Indian Reserve. British colonists and land speculators objected to the proclamation boundary since the British government had already assigned land grants to them. Many settlements already existed beyond the proclamation line, some of which had been temporarily evacuated during Pontiac's War, and many land claims already granted had yet to be settled. For example, George Washington and his Virginia soldiers had been granted lands past the boundary. Prominent American colonials joined with the land speculators in Britain to lobby the government to move the line further west. Their efforts were successful, and the boundary line was adjusted in a series of treaties with the Native Americans. In 1768, the Treaty of Fort Stanwix and the Treaty of Hard Labour, followed in 1770 by the Treaty of Lochaber, opened much of what is now Kentucky and West Virginia to British settlement within the Virginia Colony. However, the Northwest Territories north of the Ohio continued to be occupied by native tribes until US forces drove them out in the early decades of the 1800s.
- Further information: Episcopal Diocese of Virginia: History
The Church of England was legally established in the colony in 1619, and the Bishop of London sent in 22 Anglican clergymen by 1624. In practice, establishment meant that local taxes were funneled through the local parish to handle the needs of local government, such as roads and poor relief, in addition to the salary of the minister. There never was a bishop in colonial Virginia, and in practice the local vestry, consisting of gentry laymen, controlled the parish. By the 1740s, the Anglicans had about 70 parish priests around the colony.
The stress on personal piety opened the way for the First Great Awakening in the mid 18th century, which pulled people away from the formal rituals of the established church. Especially in the back country, most families had no religious affiliation whatsoever, and their low moral standards were shocking to proper Englishmen. The Baptists, Methodists, Presbyterians and other evangelicals directly challenged these lax moral standards and refused to tolerate them in their ranks. Baptists, German Lutherans and Presbyterians funded their own ministers, and favored disestablishment of the Anglican church.
The spellbinding preacher Samuel Davies led the Presbyterians, and converted hundreds of slaves. By the 1760s Baptists were drawing Virginians, especially poor white farmers, into a new, much more democratic religion. Slaves were welcome at the services and many became Baptists at this time. Methodist missionaries were also active in the late colonial period. Methodists encouraged an end to slavery, and welcomed free blacks and slaves into active roles in the congregations.
The Baptists and Presbyterians were subject to many legal constraints and faced growing persecution; between 1768 and 1774, about half of the Baptist ministers in Virginia were jailed for preaching, in defiance of England's Act of Toleration of 1689, which guaranteed freedom of worship for Protestants. At the start of the Revolution, the Anglican Patriots realized that they needed dissenter support for effective wartime mobilization, so they met most of the dissenters' demands in return for their support of the war effort.
Historians have debated the implications of the religious rivalries for the American Revolution. The struggle for religious toleration was played out during the American Revolution, as the Baptists, in alliance with Thomas Jefferson and James Madison, worked successfully to disestablish the Anglican church. After the American victory in the war, the Anglican establishment sought to reintroduce state support for religion. This effort failed when non-Anglicans gave their support to Jefferson's "Bill for Establishing Religious Freedom", which eventually became law in 1786 as the Virginia Statute for Religious Freedom. With freedom of religion the new watchword, the Church of England was dis-established in Virginia. It was rebuilt as the Episcopal Church in the United States, with no connection to Britain.
Revolutionary sentiments first began appearing in Virginia shortly after the French and Indian War ended in 1763. The Virginia legislature had passed the Two-Penny Act to stop clerical salaries from inflating. King George III vetoed the measure, and clergy sued for back salaries. Patrick Henry first came to prominence by arguing in the case of Parson's Cause against the veto, which he declared tyrannical.
The British government had accumulated a great deal of debt through spending on its wars. To help pay off this debt, Parliament passed the Sugar Act in 1764 and the Stamp Act in 1765. The General Assembly opposed the passage of the Sugar Act on the grounds of no taxation without representation, and in turn passed the "Virginia Resolves" opposing the tax. Governor Francis Fauquier responded by dismissing the Assembly. The Northampton County court overturned the Stamp Act on February 8, 1766. Various political groups, including the Sons of Liberty, met and issued protests against the act. Most notably, Richard Bland published a pamphlet entitled An Enquiry into the Rights of The British Colonies, setting forth the principle that Virginia was a part of the British Empire, not the Kingdom of Great Britain, so it only owed allegiance to the Crown, not Parliament.
The Stamp Act was repealed, but additional taxation from the Revenue Act and the 1769 attempt to transport Bostonian rioters to London for trial incited more protest from Virginia. The Assembly met to consider resolutions condemning the transport of the rioters, but Governor Botetourt, while sympathetic, dissolved the legislature. The Burgesses reconvened in Raleigh Tavern and made an agreement to ban British imports. Britain gave up the attempt to extradite the prisoners and lifted all taxes except the tax on tea in 1770.
In 1773, because of a renewed attempt to extradite Americans to Britain, Richard Henry Lee, Thomas Jefferson, Patrick Henry, George Mason, and others in the legislature created a committee of correspondence to deal with problems with Britain. This committee would serve as the foundation for Virginia's role in the American Revolution.
After the House of Burgesses expressed solidarity with the actions in Massachusetts, the Governor, Lord Dunmore, again dissolved the legislature. The first Virginia Convention was held August 1–6 to respond to the growing crisis. The convention approved a boycott of British goods and elected delegates to the Continental Congress.
On April 20, 1775, Dunmore ordered the gunpowder removed from the Williamsburg Magazine to a British ship. Patrick Henry led a group of Virginia militia from Hanover in response to Dunmore's order. Carter Braxton negotiated a resolution to the Gunpowder Incident by transferring royal funds as payment for the powder. The incident exacerbated Dunmore's declining popularity. He fled the Governor's Palace to a British ship at Yorktown. On November 7, Dunmore issued a proclamation declaring Virginia was in a state of rebellion. By this time, George Washington had been appointed head of the American forces by the Continental Congress and Virginia was under the political leadership of a Committee of Safety formed by the Third Virginia Convention in the governor's absence.
On December 9, 1775, Virginia militia moved on the governor's forces at the Battle of Great Bridge, winning a victory in the small action there. Dunmore responded by bombarding Norfolk with his ships on January 1, 1776. After the Battle of Great Bridge, little military conflict took place on Virginia soil for the first part of the American Revolutionary War. Nevertheless, Virginia sent forces to help in the fighting to the North and South, as well as the frontier in the northwest.
The Fifth Virginia Convention met on May 6 and declared Virginia a free and independent state on May 15, 1776. The convention instructed its delegates to introduce a resolution for independence at the Continental Congress. Richard Henry Lee introduced the measure on June 7. While the Congress debated, the Virginia Convention adopted George Mason's Bill of Rights (June 12) and a constitution (June 29) which established an independent commonwealth. Congress approved Lee's proposal on July 2 and approved Jefferson's Declaration of Independence on July 4. The constitution of the Fifth Virginia Convention created a system of government for the state that would last for 54 years, converting the House of Burgesses into a bicameral legislature with both a House of Delegates and a Senate. Patrick Henry served as the first Governor of the Commonwealth (1776-1779).
The British briefly brought the war back to coastal Virginia in May 1779. Fearing the vulnerability of Williamsburg, Governor Thomas Jefferson moved the capital farther inland to Richmond in 1780. However, in December, Benedict Arnold, who had betrayed the Revolution and become a general for the British, attacked Richmond and burned part of the city before the Virginia Militia drove his army out of the city.
Arnold moved his base of operations to Portsmouth and was later joined by troops under General William Phillips. Phillips led an expedition that destroyed military and economic targets, against ineffectual militia resistance. The state's defenses, led by General Baron von Steuben, put up resistance in the April 1781 Battle of Blandford, but were forced to retreat. The French General Lafayette and his forces arrived to help defend Virginia, and though outnumbered, engaged British forces under General Charles Cornwallis in a series of skirmishes to help reduce their effectiveness. Cornwallis dispatched two smaller missions under Colonel John Graves Simcoe and Colonel Banastre Tarleton to march on Charlottesville and capture Gov. Jefferson and the legislature, though the raid was foiled when Jack Jouett rode ahead to warn the Virginia government.
Cornwallis moved down the Virginia Peninsula towards the Chesapeake Bay, where Clinton planned to extract part of the army for a siege of New York City. After surprising American forces at the Battle of Green Spring on July 6, 1781, Cornwallis received orders to move his troops to the port town of Yorktown and begin construction of fortifications and a naval yard; once this was discovered, American forces surrounded the town. Gen. Washington and his French ally Rochambeau moved their forces from New York to Virginia. The defeat of the Royal Navy by Admiral de Grasse at the Battle of the Virginia Capes ensured French dominance of the waters around Yorktown, thereby preventing Cornwallis from receiving troops or supplies and removing the possibility of evacuation. Following the two-week siege of Yorktown, Cornwallis decided to surrender. Papers for surrender were officially signed on October 19.
As a result of the defeat, the king lost control of Parliament and the new British government offered peace in April 1782. The Treaty of Paris of 1783 officially ended the war.
Victory in the Revolution brought peace and prosperity to the new state, as export markets in Europe reopened for its tobacco.
While the old local elites were content with the status quo, younger veterans of the war had developed a national identity. Led by George Washington and James Madison, Virginia played a major role in the Constitutional Convention of 1787 in Philadelphia. Madison proposed the Virginia Plan, which would give representation in Congress according to total population, including a proportion of slaves. Virginia was the most populous state, and it was allowed to count all of its white residents and 3/5 of the enslaved African Americans for its congressional representation and its electoral vote. (Only white men who owned a certain amount of property could vote.) Ratification was bitterly contested; the pro-Constitution forces prevailed only after promising to add a Bill of Rights. The Virginia Ratifying Convention approved the Constitution by a vote of 89–79 on June 25, 1788, making it the tenth state to enter the Union.
Madison played a central role in the new Congress, while Washington was the unanimous choice as first president. He was followed by the Virginia Dynasty, including Thomas Jefferson, Madison, and James Monroe, giving the state four of the first five presidents.
The Revolution meant change and sometimes political freedom for enslaved African Americans, too. Tens of thousands of slaves from southern states, particularly in Georgia and South Carolina, escaped to British lines and freedom during the war. Thousands left with the British for resettlement in their colonies of Nova Scotia and Jamaica; others went to England; others disappeared into rural and frontier areas or the North.
Inspired by the Revolution and evangelical preachers, numerous slaveholders in the Chesapeake region manumitted some or all of their slaves, during their lifetimes or by will. The population of free blacks in Virginia rose from 1,800 persons in 1782 to 12,766 (4.3 percent of blacks) in 1790, and to 30,570 in 1810; free blacks thus grew from less than one percent of the total black population in Virginia to 7.2 percent by 1810, even as the overall population increased. One planter, Robert Carter III, freed more than 450 slaves in his lifetime, more than any other planter. George Washington freed all of his slaves at his death.
Many free blacks migrated from rural areas to towns such as Petersburg, Richmond, and Charlottesville for jobs and community; others migrated with their families to the frontier where social strictures were more relaxed. Among the oldest black Baptist congregations in the nation were two founded near Petersburg before the Revolution. Each congregation moved into the city and built churches by the early 19th century.
Twice slave rebellions broke out in Virginia: Gabriel's Rebellion in 1800, and Nat Turner's Rebellion in 1831. White reaction was swift and harsh, and militias killed many innocent free blacks and black slaves as well as those directly involved in the rebellions. After the second rebellion, the legislature passed laws restricting the rights of free people of color: they were excluded from bearing arms, serving in the militia, gaining education, and assembling in groups. As bearing arms and serving in the militia were considered obligations of free citizens, free blacks came under severe constraints after Nat Turner's rebellion.
As the new nation of the United States of America experienced growing pains and began to speak of Manifest Destiny, Virginia, too, found its role in the young republic to be changing and challenging. For one, the vast lands of the Virginia Colony were subdivided into other US states and territories. In 1784, Virginia relinquished its claims to Illinois County, Virginia, except for the Virginia Military District (Southern Indiana). In 1775, Daniel Boone blazed a trail for the Transylvania Company from Fort Chiswell in Virginia through the Cumberland Gap into central Kentucky. This Wilderness Road became the principal route used by settlers for more than fifty years to reach Kentucky from the East. The fledgling US government rewarded veterans of the Revolutionary War with plots of land along the Ohio River in the Northwest Territory. In 1792, three western counties split off to form Kentucky.
A second influence: the lands seemed to be more fertile in the west. Virginia's heavy farming of tobacco for 200 years had depleted its soils.
The 1803 Louisiana Purchase only accelerated the westward movement of Virginians out of their native state. Many of the Virginians whose grandparents had created the Virginia Establishment began to emigrate and settle westward. Famous Virginian-born Americans affected not only the destiny of the state of Virginia, but the rapidly developing American Old West. Virginians Meriwether Lewis and William Clark were influential in their famous 1804-1806 expedition to explore the Missouri River and possible connections to the Pacific Ocean. Notable names such as Stephen F. Austin, Edwin Waller, Haden Harrison Edwards, and Dr. John Shackelford were famous Texan pioneers from Virginia. Even eventual Civil War general Robert E. Lee distinguished himself as a military leader in Texas during the 1846–48 Mexican–American War.
Historians estimate that one million Virginians left the commonwealth between the Revolution and the Civil War. With this exodus, Virginia experienced a decline in both population and political influence. Prominent Virginians formed the Virginia Historical and Philosophical Society to preserve the legacy and memory of its past. At the same time, with Virginians settling so much of the west, they brought their cultural habits with them. Today, many cultural features of the American South can be attributed to Virginians who migrated west.
As the western reaches of Virginia were developed in the first half of the 19th century, the vast differences in the agricultural base, culture, and transportation needs of the area became a major issue for the Virginia General Assembly. In the older, eastern portion, slavery contributed to the economy. While planters were moving away from labor-intensive tobacco to mixed crops, they still held numerous slaves, and leasing out or selling slaves was also part of their economic prospects. Slavery had become an economic institution upon which planters depended. Watersheds in most of this area eventually drained to the Atlantic Ocean. In the western reaches, families farmed smaller homesteads, mostly without enslaved or hired labor. Settlers were expanding the exploitation of resources: mining of minerals and harvesting of timber. The land drained into the Ohio River Valley, and trade followed the rivers.
Representation in the state legislature was heavily skewed in favor of the more populous eastern areas and the historic planter elite. This was compounded by the partial allowance for slaves when counting population; as neither the slaves nor women had the vote, this gave more power to white men. The legislature's efforts to mediate the disparities ended without meaningful resolution, although the state held a constitutional convention on representation issues. Thus, at the outset of the American Civil War, Virginia was caught not only in national crisis, but in a long-standing controversy within its own boundaries. While other border states had similar regional differences, Virginia had a long history of east-west tensions which finally came to a head; it was the only state to divide into two separate states during the War.
After the Revolution, various infrastructure projects began to be developed, including the Dismal Swamp Canal, the James River and Kanawha Canal, and various turnpikes. Virginia was home to the first of all Federal infrastructure projects under the new Constitution, the Cape Henry Light of 1792, located at the mouth of the Chesapeake Bay. Following the War of 1812, several Federal national defense projects were undertaken in Virginia. Drydock Number One was constructed in Portsmouth in 1827. Across the James River, Fort Monroe was built to defend Hampton Roads, completed in 1834.
In the 1830s, railroads began to be built in Virginia. In 1831, the Chesterfield Railroad began hauling coal from the mines in Midlothian to docks at Manchester (near Richmond), powered by gravity and draft animals. The first railroad in Virginia to be powered by locomotives was the Richmond, Fredericksburg and Potomac Railroad, chartered in 1834 with the intent to connect with steamboat lines at Aquia Landing running to Washington, D.C. Soon after, others (with equally descriptive names) followed: the Richmond and Petersburg Railroad and Louisa Railroad in 1836, the Richmond and Danville Railroad in 1847, the Orange and Alexandria Railroad in 1848, and the Richmond and York River Railroad. In 1849, the Virginia Board of Public Works established the Blue Ridge Railroad. Under engineer Claudius Crozet, the railroad successfully crossed the Blue Ridge Mountains via the Blue Ridge Tunnel at Afton Mountain.
Petersburg became a manufacturing center, as well as a city where free black artisans and craftsmen could make a living. In 1860, half its population was black and of that, one-third were free blacks, the largest such population in the state.
With extensive iron deposits, especially in the western counties, Virginia was a pioneer in the iron industry. The first ironworks in the New World was established at Falling Creek in 1619, though it was destroyed in 1622. The state eventually had some 80 ironworks, charcoal furnaces, and forges employing about 7,000 hands at any one time, roughly 70 percent of them slaves. Ironmasters hired slaves from local slave owners because they were cheaper than white workers, easier to control, and could not switch to a better employer. But their work ethic was weak, because the wages went to the owner, not to the workers, who were forced to work hard, were poorly fed and clothed, and were separated from their families. Virginia's industry increasingly fell behind Pennsylvania, New Jersey, and Ohio, which relied on free labor. Bradford (1959) recounts the many complaints about slave laborers and argues that over-reliance upon slaves contributed to the failure of the ironmasters to adopt improved methods of production, for fear the slaves would sabotage them. Most of the blacks were unskilled manual laborers, although Lewis (1977) reports that some held skilled positions.
Virginia at first refused to join the Confederacy, but did so after President Lincoln on April 15 called for troops from all states; that meant Federal troops crossing Virginia on the way south to subdue South Carolina. On April 17, 1861 the convention voted to secede, and voters ratified the decision on May 23. Immediately the Union army moved into northern Virginia and captured Alexandria without a fight, and controlled it for the remainder of the war. The Wheeling area had opposed secession and remained strong for the Union.
Because of its strategic significance, the Confederacy relocated its capital to Richmond. Richmond was at the end of a long supply line, and as the highly symbolic capital of the Confederacy it became the main target of round after round of invasion attempts. A major center of iron production during the Civil War was located in Richmond at Tredegar Iron Works, which produced most of the artillery for the war. The city was the site of numerous army hospitals. Libby Prison for captured Union officers gained an infamous reputation for its overcrowded and harsh conditions, with a high death rate. Richmond's main defenses were trenches that surrounded the city and extended south toward the nearby city of Petersburg. Saltville was a primary source of Confederate salt (critical for food preservation) during the war, leading to the two Battles of Saltville.
The first major battle of the Civil War occurred on July 21, 1861. Union forces attempted to take control of the railroad junction at Manassas, but the Confederate Army reached it first and won the First Battle of Manassas (known as "Bull Run" in Northern naming convention). Both sides mobilized for war; the year 1861 went on without another major fight.
Men from all economic and social levels, both slaveholders and nonslaveholders, as well as former Unionists, enlisted in great numbers on both sides. Areas, especially in the west and along the border, that sent few men to the Confederacy were characterized by few slaves, poor economies, and a history of regional antagonism to the Tidewater.
The western counties could not tolerate the Confederacy and broke away. From May to August 1861, a series of Unionist conventions met in Wheeling; the Second Wheeling Convention constituted itself as a legislative body called the Restored Government of Virginia. It declared that Virginia was still in the Union but that the state offices were vacant, and it elected a new governor, Francis H. Pierpont; this body gained formal recognition by the Lincoln administration on July 4. On August 20 the Wheeling body passed an ordinance for the creation of a new state, which was put to a public vote on October 24 and approved. Congress and Lincoln approved as well, and, after the new state constitution provided for gradual emancipation of slaves, West Virginia became the 35th state on June 20, 1863. The Restored Government of Virginia, distinct from the new state, did little except give its consent in 1862 to the creation of West Virginia; it later moved to Alexandria, across the river from Washington, and persisted until the end of the war. In effect there were now three governments: the Confederate Virginia, the Union Restored Virginia, and West Virginia.
The state and national governments in Richmond did not recognize the new state, and Confederates did not vote there. The Confederate government in Richmond sent in Robert E. Lee. But Lee found little local support and was defeated by Union forces from Ohio. Union victories in 1861 drove the Confederate forces out of the Monongahela and Kanawha valleys, and throughout the remainder of the war the Union held the region west of the Alleghenies and controlled the Baltimore and Ohio Railroad in the north. The new state was not subject to Reconstruction.
For the remainder of the war, many major battles were fought across Virginia, including the Seven Days Battles, the Battle of Fredericksburg, the Battle of Chancellorsville, and the Battle of Brandy Station.
Over the course of the War, despite occasional tactical victories and spectacular counter-stroke raids, Confederate control of many regions of Virginia was gradually lost to the Federal advance. By October 1862 the northern 9th and 10th Congressional districts along the Potomac were under Union control. The Eastern Shore, the Northern Neck, the Middle and Lower Peninsulas, and the 2nd congressional district surrounding Norfolk west to Suffolk were permanently Union-occupied by May. Other regions, such as the Piedmont and Shenandoah Valley, regularly changed hands through numerous campaigns.
In 1864, the Union Army planned to attack Richmond by a direct overland approach, in what became the Overland Campaign, beginning with the Battle of the Wilderness and culminating in the Siege of Petersburg, which lasted from the summer of 1864 to April 1865. By November 6, 1864, Confederate forces controlled only four of Virginia's 16 congressional districts, in the region of Richmond-Petersburg and their Southside counties.
In April 1865, Richmond was burned by the retreating Confederate Army; Lincoln walked the city streets to cheering crowds of newly freed blacks. The Confederate government fled south, pausing in Danville for a few days. The end came when Lee surrendered to Ulysses S. Grant at Appomattox on April 9, 1865.
Virginia had been devastated by the war, with the infrastructure (such as railroads) in ruins; many plantations burned out; and large numbers of refugees without jobs, food or supplies beyond rations provided by the Union Army, especially its Freedmen's Bureau.
Historian Mary Farmer-Kaiser reports that white landowners complained to the Bureau that freedwomen's unwillingness to work in the fields was evidence of their laziness, and asked the Bureau to force them to sign labor contracts. In response, many Bureau officials "readily condemned the withdrawal of freedwomen from the work force as well as the 'hen pecked' husbands who allowed it." While the Bureau did not force freedwomen to work, it did force freedmen to work or be arrested as vagrants. Furthermore, agents urged poor unmarried mothers to give their older children up as apprentices to work for white masters. Farmer-Kaiser concludes that "Freedwomen found both an ally and an enemy in the bureau."
There were three phases in Virginia's Reconstruction era: wartime, presidential, and congressional. Immediately after the war President Andrew Johnson recognized the Francis Harrison Pierpont government as legitimate and restored local government. The Virginia legislature passed Black Codes that severely restricted Freedmen's mobility and rights; they had only limited rights and were not considered citizens, nor could they vote. The state ratified the 13th Amendment to abolish slavery and revoked the 1861 ordinance of secession. Johnson was satisfied that Reconstruction was complete.
Republicans in Congress, however, refused to seat the newly elected state delegation; the Radicals wanted better evidence that slavery and similar forms of servitude had been abolished, and that the freedmen had been given the rights of citizens. They were also concerned that Virginia leaders had not renounced Confederate nationalism. After winning large majorities in the 1866 national election, the Radical Republicans gained power in Congress. They put Virginia (and nine other ex-Confederate states) under military rule. Virginia was administered as the "First Military District" in 1867–69 under General John Schofield. Meanwhile, the Freedmen became politically active by joining the pro-Republican Union League, holding conventions, and demanding universal male suffrage and equal treatment under the law, as well as the disfranchisement of ex-Confederates and the seizure of their plantations. McDonough, finding that Schofield was criticized by conservative whites for supporting the Radical cause and attacked by Radicals for thinking black suffrage premature, concludes that "he performed admirably" by following a middle course between extremes.
Increasingly, a deep split opened up in the Republican ranks. The moderate element had national support and called itself the "True Republicans." The more radical element set out to disfranchise former Confederates, for example by barring from office any man who had served as a private in the Confederate army or had sold food to the Confederate government, and also pushed for land reform. About 20,000 former Confederates were denied the right to vote in the 1867 election. In 1867, the radical James Hunnicutt (1814–1880), a white preacher, editor, and Scalawag (a white Southerner supporting Reconstruction), mobilized the black Republican vote by calling for the confiscation of all plantations and turning the land over to Freedmen and poor whites. The "True Republicans" (the moderates), led by former Whigs, businessmen, and planters, while supportive of black suffrage, drew the line at property confiscation. A compromise was reached calling for confiscation only if the planters tried to intimidate black voters. Hunnicutt's coalition took control of the Republican Party and began to demand the permanent disfranchisement of all whites who had supported the Confederacy. The Virginia Republican Party became permanently split, and many moderate Republicans switched to the opposition "Conservatives". The Radicals won the 1867 election for delegates to a constitutional convention.
The 1868 constitutional convention included 33 white Conservatives and 72 Radicals (of whom 24 were Blacks, 23 were Scalawags, and 21 were Carpetbaggers). Called the "Underwood Constitution" after the convention's presiding officer, the resulting document's main accomplishments were reforming the tax system and creating a system of free public schools for the first time in Virginia. After heated debates over disfranchising Confederates, the convention approved a constitution that excluded ex-Confederates from holding office but allowed them to vote in state and federal elections.
Under pressure from national Republicans to be more moderate, General Schofield continued to administer the state through the Army. He appointed a personal friend, Henry H. Wells, as provisional governor. Wells was a Carpetbagger and a former Union general. Schofield and Wells fought and defeated Hunnicutt and the Scalawag Republicans, taking away the contracts for state printing orders from Hunnicutt's newspaper. The national government ordered elections in 1869 that included a vote on the new Underwood constitution, a separate vote on its two disfranchisement clauses that would have permanently stripped the vote from most former rebels, and a separate vote for state officials. The Army enrolled the Freedmen (ex-slaves) as voters but would not allow some 20,000 prominent whites to vote or hold office. The Republicans nominated Wells for governor, as Hunnicutt and most Scalawags went over to the opposition.
The leader of the moderate Republicans, calling themselves "True Republicans," was William Mahone (1826–1895), a railroad president and former Confederate general. He built a coalition of white Scalawag Republicans, some blacks, and the ex-Democrats who had formed the Conservative Party. Mahone argued that whites had to accept the results of the war, including civil rights and the vote for Freedmen. Mahone convinced the Conservative Party to drop its own candidate and endorse Gilbert C. Walker, Mahone's candidate for governor. In return, Mahone's people endorsed Conservatives in the legislative races. Mahone's plan worked, as the voters in 1869 elected Walker and defeated the proposed disfranchisement of ex-Confederates.
When the new legislature ratified the 14th and 15th Amendments to the U.S. Constitution, Congress seated its delegation, and Virginia Reconstruction came to an end in January 1870. The Radical Republicans had been ousted in a non-violent election; Virginia was the only southern state that never elected a civilian government run on Radical Republican principles. Suffering from widespread destruction and difficulties in adapting to free labor, white Virginians generally came to share the postwar bitterness typical of southern attitudes. Historian Richard Lowe argues that the obstacles faced by the Radical Republican movement made their cause hopeless:
- even more damaging to Republicans' prospects than their poverty, their inexperience in state politics, their isolation from potential allies, and their identification with the hated North was the perverse and powerful racism that ran so powerfully through the white community. The great majority of the Old Dominion's white citizens could not take seriously a political party composed primarily of former slaves.
In addition to those that were rebuilt, new railroads developed after the Civil War. In 1868, under railroad baron Collis P. Huntington, the Virginia Central Railroad was merged and transformed into the Chesapeake and Ohio Railroad. In 1870, several railroads were merged to form the Atlantic, Mississippi and Ohio Railroad, later renamed Norfolk & Western. In 1880, the towpath of the now-defunct James River & Kanawha canal was transformed into the Richmond and Allegheny Railroad, which within a decade would merge into the Chesapeake & Ohio. Others would include the Southern Railroad, the Seaboard Air Line, and the Atlantic Coast Line; still others would eventually reach into Virginia, including the Baltimore & Ohio and the Pennsylvania Railroad. The rebuilt Richmond, Fredericksburg, and Potomac Railroad eventually was linked to Washington, D.C.
In the 1880s, the Pocahontas Coalfield opened up in far southwest Virginia, with others to follow, in turn providing more demand for railroad transportation. In 1909, the Virginian Railway opened, built for the express purpose of hauling coal from the mountains of West Virginia to the ports at Hampton Roads. The growth of railroads resulted in the creation of new towns and rapid growth of others, including Clifton Forge, Roanoke, Crewe and Victoria. The railroad boom was not without incident: the Wreck of the Old 97 occurred just north of Danville, Virginia in 1903, later immortalized by a popular ballad.
With the invention of the cigarette rolling machine, and the great increase in smoking in the early 20th century, cigarettes and other tobacco products became a major industry in Richmond and Petersburg. Tobacco magnates such as Lewis Ginter funded a number of public institutions.
A division among Virginia politicians occurred in the 1870s, when those who supported a reduction of Virginia's pre-war debt ("Readjusters") opposed those who felt Virginia should repay its entire debt plus interest ("Funders"). Virginia's pre-war debt was primarily for infrastructure improvements overseen by the Virginia Board of Public Works, much of which had been destroyed during the war or lay in the new State of West Virginia.
After his unsuccessful bid for the Democratic nomination for governor in 1877, former Confederate general and railroad executive William Mahone became the leader of the "Readjusters", forming a coalition of conservative Democrats and white and black Republicans. The so-called Readjusters aspired "to break the power of wealth and established privilege" and to promote public education. The party promised to "readjust" the state debt in order to protect funding for newly established public education and to allocate a fair share of the debt to the new State of West Virginia. Its proposal to repeal the poll tax and increase funding for schools and other public facilities attracted biracial and cross-party support.
The Readjuster Party was successful in electing its candidate, William E. Cameron, as governor, and he served from 1882 to 1886. Mahone served as a Senator in the U.S. Congress from 1881 to 1887, as did fellow Readjuster Harrison H. Riddleberger, who served in the U.S. Senate from 1883 to 1889. The Readjusters' effective control of Virginia politics lasted until 1883, when they lost majority control in the state legislature, followed by the election of Democrat Fitzhugh Lee as governor in 1885. The Virginia legislature replaced both Mahone and Riddleberger in the U.S. Senate with Democrats.
In 1888, the exception to Readjuster and Democratic control was John Mercer Langston, who was elected to Congress from the Petersburg area on the Republican ticket. He was the first black elected to Congress from the state, and the last for nearly a century. He served one term. A talented and vigorous politician, he was an Oberlin College graduate. He had long been active in the abolitionist cause in Ohio before the Civil War, had been president of the National Equal Rights League from 1864 to 1868, and had created and headed the law department at Howard University, where he also served as acting president. When elected, he was president of what became Virginia State University.
While the Readjuster Party faded, the goal of public education remained strong, with institutions established for the education of schoolteachers. In 1884, the state acquired a bankrupt women's college at Farmville and opened it as a normal school. The growth of public education led to the need for additional teachers. In 1908, two additional normal schools were established, one at Fredericksburg and one at Harrisonburg, and in 1910, another at Radford.
After the Readjuster Party disappeared, Virginia Democrats rapidly passed legislation and constitutional amendments that effectively disfranchised African Americans and many poor whites, through the use of poll taxes and literacy tests. They created white, one-party rule under the Democratic Party for the next 80 years. White state legislators passed statutes that restored white supremacy through imposition of Jim Crow segregation. In 1902, Virginia passed a new constitution that reduced voter registration.
The Progressive Era after 1900 brought numerous reforms, designed to modernize the state, increase efficiency, apply scientific methods, promote education and eliminate waste and corruption.
A key leader was Governor Claude Swanson (1906–10), a Democrat who left machine politics behind to win office using the new primary law. Swanson's coalition of reformers in the legislature built schools and highways, raised teacher salaries and standards, promoted the state's public health programs, and increased funding for prisons. Swanson fought against child labor, lowered railroad rates, and raised corporate taxes, while systematizing state services and introducing modern management techniques. The state funded a growing network of roads, with much of the work done by black convicts in chain gangs. After Swanson moved to the U.S. Senate in 1910 he promoted Progressivism at the national level as a supporter of President Woodrow Wilson, who had been born in Virginia and was considered a native son. Swanson, as a power on naval affairs, promoted the Norfolk Navy Yard and the Newport News Shipbuilding and Drydock Corporation. Swanson's statewide organization evolved into the "Byrd Organization."
The State Corporation Commission (SCC) was formed as part of the 1902 Constitution, over the opposition of the railroads, to regulate railroad policies and rates. The SCC was independent of parties, courts, and big businesses, and was designed to maximize the public interest. It became an effective agency, which especially pleased local merchants by keeping rates low.
Virginia has a long history of agricultural reformers, and the Progressive Era stimulated their efforts. Rural areas suffered persistent problems, such as declining populations, widespread illiteracy, poor farming techniques, and debilitating diseases among both farm animals and farm families. Reformers emphasized the need to upgrade the quality of elementary education. With federal help, they set up a county agent system (today the Virginia Cooperative Extension) that taught farmers the latest scientific methods for dealing with tobacco and other crops, and taught farm housewives how to maximize their efficiency in the kitchen and nursery.
Some upper-class women, typified by Lila Meade Valentine of Richmond, promoted numerous Progressive reforms, including kindergartens, teacher education, visiting nurses programs, and vocational education for both races. Middle-class white women were especially active in the Prohibition movement. The woman suffrage movement became entangled in racial issues—whites were reluctant to allow black women the vote—and was unable to broaden its base beyond middle-class whites. Virginia women got the vote in 1920, the result of a national constitutional amendment.
In higher education, the key leader was Edwin A. Alderman, president of the University of Virginia, 1904–31. His goal was the transformation of the southern university into a force for state service, intellectual leadership, and educational utility. Alderman successfully professionalized and modernized the state's system of higher education. He promoted international standards of scholarship and a statewide network of extension services. Joined by other college presidents, he promoted the Virginia Education Commission, created in 1910. Alderman's crusade encountered some resistance from traditionalists, and never challenged the Jim Crow system of segregated schooling.
While the progressives were modernizers, there was also a surge of interest in Virginia traditions and heritage, especially among the aristocratic First Families of Virginia (FFV). The Association for the Preservation of Virginia Antiquities (APVA), founded in Williamsburg in 1889, emphasized patriotism in the name of Virginia's 18th-century Founding Fathers. In 1907, the Jamestown Exposition was held near Norfolk to celebrate the tricentennial of the arrival of the first English colonists and the founding of Jamestown.
Attended by numerous federal dignitaries, and serving as the launch point for the Great White Fleet, the Jamestown Exposition also spurred interest in the military potential of the area. The site of the exposition would later become, in 1917, the location of the Norfolk Naval Station. The proximity to Washington, D.C., the moderate climate, and strategic location of a large harbor at the center of the Atlantic seaboard made Virginia a key location during World War I for new military installations. These included Fort Story, the Army Signal Corps station at Langley, Quantico Marine Base in Prince William County, Fort Belvoir in Fairfax County, Fort Lee near Petersburg and Fort Eustis, in Warwick County (now Newport News). At the same time, heavy shipping traffic made the area a target for U-boats, and a number of merchant vessels were attacked or sunk off the Virginia coast.
Temperance became an issue in the early 20th century. In 1916, a statewide referendum passed to outlaw the consumption of alcohol. This was overturned in 1933.
After 1930, tourism began to grow with the development of Colonial Williamsburg.
Shenandoah National Park was assembled from newly acquired land, as were the Blue Ridge Parkway and Skyline Drive. The Civilian Conservation Corps played a major role in developing that national park, as well as Pocahontas State Park. By 1940, new highway bridges crossed the lower Potomac, Rappahannock, York, and James Rivers, bringing to an end the long-distance steamboat service which had long served as primary transportation throughout the Chesapeake Bay area. Ferryboats remain today in only a few places.
Blacks comprised a third of the population but lost nearly all their political power. The electorate was so small that from 1905 to 1948 government employees and officeholders cast a third of the votes in state elections. This small, controllable electorate facilitated the formation of a powerful statewide political machine by Harry Byrd (1887–1966), which dominated from the 1920s to the 1960s. Most of the blacks who remained politically active supported the Byrd organization, which in turn protected their right to vote, making Virginia's race relations the most harmonious in the South before the 1950s, according to V.O. Key. Not until Federal civil rights legislation was passed in 1964 and 1965 did African Americans recover the power to vote and the protection of other basic constitutional civil rights.
The economic stimulus of the Second World War brought full employment for workers, high wages, and high profits for farmers. It brought in many thousands of soldiers and sailors for training. Virginia sent 300,000 men and 4,000 women to the services. The buildup for the war greatly increased the state's naval and industrial economic base, as did the growth of federal government jobs in Northern Virginia and adjacent Washington, DC. The Pentagon was built in Arlington as the largest office building in the world. Additional installations were added: in 1941, Fort A.P. Hill and Fort Pickett opened, and Fort Lee was reactivated. The Newport News shipyard expanded its labor force from 17,000 to 70,000 in 1943, while the Radford Arsenal had 22,000 workers making explosives. Turnover was very high: in one three-month period the Newport News shipyard hired 8,400 new workers as 8,300 others quit.
In addition to general postwar growth, the Cold War resulted in further growth in both Northern Virginia and Hampton Roads. With the Pentagon already established in Arlington, the newly formed Central Intelligence Agency located its headquarters further afield at Langley (unrelated to the Air Force Base). In the early 1960s, the new Dulles International Airport was built, straddling the Fairfax County-Loudoun County border. Other sites in Northern Virginia included the listening station at Vint Hill. Due to the presence of the U.S. Atlantic Fleet in Norfolk, in 1952 the Allied Command Atlantic of NATO was headquartered there, where it remained for the duration of the Cold War. Later in the 1950s and across the river, Newport News Shipbuilding would begin construction of the USS Enterprise—the world's first nuclear-powered aircraft carrier—and the subsequent atomic carrier fleet.
Virginia also witnessed American efforts in the Space Race. When the National Advisory Committee for Aeronautics was transformed into the National Aeronautics and Space Administration in 1958, the resulting Space Task Group was headquartered at the laboratories of Langley Research Center. From there, it initiated Project Mercury and remained the headquarters of the U.S. manned spaceflight program until its transfer to Houston in 1962. On the Eastern Shore, near Chincoteague, Wallops Flight Facility served as a rocket launch site, including the launch of Little Joe 2 on December 4, 1959, which sent a rhesus monkey, Sam, into suborbital spaceflight. Langley later oversaw the Viking program to Mars.
The new U.S. Interstate Highway System begun in the 1950s and the new Hampton Roads Bridge-Tunnel in 1958 helped transform Virginia Beach from a tiny resort town into one of the state's largest cities by 1963, and spurred the growth of the Hampton Roads region linked by the Hampton Roads Beltway. In the western portion of the state, completion of north-south Interstate 81 brought better access and new businesses to dozens of counties over a distance of 300 miles (480 km), as well as facilitating travel by students at the many Shenandoah-area colleges and universities. The creation of Smith Mountain Lake, Lake Anna, Claytor Lake, Lake Gaston, and Buggs Island Lake, by damming rivers, attracted many retirees and vacationers to those rural areas. As the century drew to a close, Virginia tobacco growing gradually declined due to health concerns, although not as steeply as in Southern Maryland. A state community college system brought affordable higher education within commuting distance of most Virginians, including those in remote, underserved localities. Other new institutions were founded, most notably George Mason University and Liberty University. Localities such as Danville and Martinsville suffered greatly as their manufacturing industries closed.
In 1944, Irene Morgan refused to give up her seat on an interstate bus and was arrested in Middlesex County, Virginia, pursuant to Virginia's segregation laws. Morgan appealed her case up to the Supreme Court and, in 1946, won Irene Morgan v. Commonwealth of Virginia, which struck down segregation on interstate buses. Virginia continued to enforce interstate bus segregation, and in 1947, activists organized a series of integrated rides, the Journey of Reconciliation, through Virginia and other states of the Upper South in an act of civil disobedience against Virginia's defiance of the Supreme Court's ruling. Another Supreme Court ruling involving Virginia, Boynton v. Virginia, desegregated interstate bus terminals. Morgan, Boynton, and the Journey of Reconciliation inspired the 1961 Freedom Rides that fought bus segregation in the Deep South. Along with the bus desegregation cases, Virginia was a party in the Supreme Court ruling that invalidated laws prohibiting interracial marriage, Loving v. Virginia.
The state government orchestrated systematic resistance to federal court orders requiring the end of segregation. The state legislature even enacted a package of laws, known as the Stanley plan, to try to evade racial integration in public schools. Prince Edward County even closed all its public schools in an attempt to avoid racial integration, but relented in the face of U.S. Supreme Court rulings. The first black students attended the University of Virginia School of Law in 1950, and Virginia Tech in 1953. In 2008, various actions of the Civil Rights Movement were commemorated by the Virginia Civil Rights Memorial in Richmond.
By the 1980s, Northern Virginia and the Hampton Roads region had achieved the greatest growth and prosperity, chiefly because of employment related to Federal government agencies and defense, as well as an increase in technology in Northern Virginia. Shipping through the Port of Hampton Roads began an expansion which continued into the early 21st century as new container facilities were opened. Coal piers in Newport News and Norfolk had recorded major gains in export shipments by August 2008. The recent expansion of government programs in the areas near Washington has profoundly affected the economy of Northern Virginia, whose population has experienced large growth and great ethnic and cultural diversification, exemplified by communities such as Tysons Corner, Reston and dense, urban Arlington. The subsequent growth of defense projects has also generated a local information technology industry. In recent years, intolerably heavy commuter traffic and the urgent need for both road and rail transportation improvements have been a major issue in Northern Virginia. The Hampton Roads region has also experienced much growth, as have the western suburbs of Richmond in both Henrico and Chesterfield Counties.
Virginia served as a major center for information technology during the early days of the Internet and network communication. Internet and other communications companies clustered in the Dulles Corridor. By 1993, the Washington area had the largest amount of Internet backbone and the highest concentration of Internet service providers. In 2000, more than half of all Internet traffic flowed along the Dulles Toll Road, and by 2016 70% of the world's internet traffic flowed through Loudoun County. Bill von Meister founded two Virginia companies that played major roles in the commercialization of the Internet: McLean, Virginia based The Source and Control Video Corporation, forerunner of America Online. While short-lived, The Source was one of the first online service providers alongside CompuServe. On hand for the launch of The Source, Isaac Asimov remarked "This is the beginning of the information age." The Source helped pave the way for future online service providers including another Virginia company founded by von Meister, America Online (AOL). AOL became the largest provider of Internet access during the Dial-up era of Internet access. AOL maintained a Virginia headquarters until the then-struggling company moved in 2007.
In 2006, former Governor of Virginia Mark Warner gave a speech and interview in the massively multiplayer online game Second Life, becoming the first politician to appear in a video game. In 2007, Virginia speedily passed the nation's first spaceflight act by a vote of 99–0 in the House of Delegates. Northern Virginia company Space Adventures is currently the only company in the world offering space tourism. In 2008, Virginia became the first state to pass legislation on Internet safety, with mandatory educational courses for 11- to 16-year-olds.
In 2013, by a slim margin in the governor's race, Virginia broke a long-standing streak of choosing a governor from the party opposite the one holding the White House. For the first time in more than thirty years, the governor and the president would be from the same party.
Stamps of Virginia events and landmarks include:
- Jamestown founding
- Mount Vernon
- Stratford Hall
- Colonial South and the Chesapeake
- Colony of Virginia
- Constitution of Virginia
- Former counties, cities, and towns of Virginia
- History of Richmond, Virginia, the current state capital
- History of the East Coast of the United States
- History of the Southern United States
- History of Virginia on stamps
- List of newspapers in Virginia in the 18th century
- Timeline of Virginia
- Virginia Conventions
- "digge upp deade corpes outt of graves and to eate them; from google (virginia cannibal) result 3".
- Charles H. Ambler and Festus P. Summers, West Virginia, the mountain state (1958) pp 48-52, 55
- "Archaeological evidence also indicates that Native Americans occupied the area as early as 6500 BC." "State Historical Highway Marker 'Pocahontas Island' To Be Dedicated in Petersburg", Petersburg, VA Official Website, Posted on: June 16, 2015, archived article accessed February 25, 2016
- Brown, Hutch (Summer 2000). "Wildland Burning by American Indians in Virginia". Fire Management Today. Washington, DC: U.S. Department of Agriculture, Forest Service. 60 (3): 32. An engraving after John White watercolor. Sparsely wooded field in background suggests the region's savanna.
- Virginia Indian Tribes, University of Richmond Archived March 9, 2005, at the Wayback Machine.
- Clarence Walworth Alvord and Lee Bidgood, 1912.
- c.f. Anishinaabe language: danakamigaa: "activity-grounds", i.e. "land of much events [for the People]"
- Edward Bland, The Discoverie of New Brittaine
- "The Shawnee Tribe & War of 1812". SchoolworkHelper.net. Retrieved August 17, 2017.
- Wood, Karenne (editor). The Virginia Indian Heritage Trail, 2007.
- Pritzker 441
- Hodge, F. W. (1910). The Handbook of American Indians North of Mexico. Washington, D.C.: Government Printing Office.
- Mooney, James, The Siouan Tribes of the Southeast. Smithsonian Institution. Washington, D.C., Government Printing Office, 1894.
- Brashler 1987
- Kent 2001
- Hale, Horatio "Tutelo Tribe & Language" (1883)
- Owen-Dorsey, James & Swanton, John R. "A Dictionary of Biloxi & Ofo" (1912)
- Speck, Frank G. "Catawba Texts" (1969)
- Collins, Scott Preston "Saponi History"
- Jerald T. Milanich (February 10, 2006). Laboring in the Fields of the Lord: Spanish Missions And Southeastern Indians. University Press of Florida. p. 169. ISBN 978-0-8130-2966-5. Retrieved June 25, 2012.
- "Discoveries of John Lederer," reprinted by O.H. Harpel, Cincinnati (1879)
- Batt's "Journal & Relation of a New Discovery" N.Y. Hist. Col. Vol. III, p. 191 (1671)
- "Lambreville to Bruyas Nov. 4,1696" N.Y. Hist. Col. Vol. III, p. 484
- Lawson's "History of Carolina" reprinted by Stroller & Marcom. Raleigh, 1860, p. 384
- N.Y. Hist. Col. Vol. V, p. 633
- "Life of Brainerd" p. 167
- N.Y. Hist. Col. Vol. VI, p. 811
- Mooney, J. (1894). The Siouan Tribes of the East. Washington, D.C.: Government Printing Office.
- "Coharie Tribe". Coharie Tribe. Coharie Tribe. Retrieved 27 January 2017.
- "EARLY INDIAN MIGRATION IN OHIO". GenealogyTrails.com. Retrieved August 17, 2017.
- Cheves, L. "Shaftesbury Papers." Col. of the South Carolina Historical Society 5, Richmond: William Ellis Jones
- "Yuchi Language Primer" (2007) Yuchi.org
- cherokeelessons.com/pdf/Cherokee Lessons 978-0-557-68640-7.pdf
- Oatis, Steven J. A Colonial Complex: South Carolina's Frontiers in the Era of the Yamasee War, 1680–1730
- Oatis, A Colonial Complex
- Charles Augustus Hanna, 1911 The Wilderness Trail, Vol II, 1911, pp. 93–95.
- Ethridge, Robbie (2003). "Chapter 5: "The People of Creek Country"". Creek Country: The Creek Indians and their World. The University of North Carolina Press. p. 93. ISBN 0-8078-5495-6.
- Berrier Jr., Ralph (September 20, 2009). "The slaughter at Saltville". The Roanoke Times. Archived from the original on September 11, 2012. Retrieved October 9, 2011.
- "Virginia Memory: Virginia Chronology". Library of Virginia. Retrieved October 9, 2011.
- James O. Glanville (2004). Conquistadors at Saltville in 1567?: A Review of the Archeological and Documentary Evidence. Smithfield Review.
- "A" New Andalucia and a Way to the Orient: The American Southeast During the Sixteenth Century. LSU Press. 1 October 2004. pp. 182–184. ISBN 978-0-8071-3028-5. Retrieved 30 March 2013.
- Stephen Adams (2001), The best and worst country in the world: perspectives on the early Virginia landscape, University of Virginia Press, p. 61, ISBN 978-0-8139-2038-2
- Charles M. Hudson; Carmen Chaves Tesser (1994). The Forgotten Centuries: Indians and Europeans in the American South, 1521-1704. University of Georgia Press. p. 359. ISBN 978-0-8203-1654-3.
- Jerald T. Milanich (February 10, 2006). Laboring in the Fields of the Lord: Spanish Missions And Southeastern Indians. University Press of Florida. p. 92. ISBN 978-0-8130-2966-5. Retrieved June 30, 2012.
- Seth Mallios (August 28, 2006). The Deadly Politics of Giving: Exchange And Violence at Ajacan, Roanoke, And Jamestown. University of Alabama Press. pp. 39–43. ISBN 978-0-8173-5336-0. Retrieved June 30, 2012.
- Price, 11
- Thomas C. Parramore; Peter C. Stewart; Tommy L. Bogger (April 1, 2000). Norfolk: The First Four Centuries. University of Virginia Press. p. 12. ISBN 978-0-8139-1988-1. Retrieved March 18, 2012.
- MR Peter C Mancall (2007). The Atlantic World and Virginia, 1550-1624. UNC Press Books. pp. 517, 522. ISBN 978-0-8078-3159-5. Retrieved 17 February 2013.
- Three names from the Roanoke Colony are still in use, all based on Native American names. Stewart, George (1945). Names on the Land: A Historical Account of Place-Naming in the United States. New York: Random House. p. 22. ISBN 1-59017-273-6.
- Raleigh, History of the World: "For when some of my people asked the name of that country, one of the savages answered 'Win-gan-da-coa', which is as much as to say, 'You wear good clothes.'"
- T. H. Breen, "Looking Out for Number One: Conflicting Cultural Values in Early Seventeenth-Century Virginia," South Atlantic Quarterly, Summer 1979, Vol. 78 Issue 3, pp. 342–360
- J. Frederick Fausz, "The 'Barbarous Massacre' Reconsidered: The Powhatan Uprising of 1622 and the Historians," Explorations in Ethnic Studies, vol 1 (Jan. 1978), 16–36
- Gleach p. 199
- John Esten Cooke, Virginia: A History of the People (1883) p. 205.
- Heinemann, Ronald L., et al., Old Dominion, New Commonwealth: a history of Virginia 1607-2007, U. Virginia Press 2007 ISBN 978-0-8139-2609-4, p.44-45
- Wilcomb E. Washburn, The Governor and the Rebel: A History of Bacon's Rebellion in Virginia (1957)
- Albert H. Tillson (1991). Gentry and Common Folk: Political Culture on a Virginia Frontier, 1740-1789. UP of Kentucky. p. 20ff.
- Alan Taylor, American Colonies: The Settling of North America (2002) p 157.
- John E. Selby, The Revolution in Virginia, 1775-1783 (1988) p 24-25.
- Quoted in Nancy L. Struna, "The Formalizing of Sport and the Formation of an Elite: The Chesapeake Gentry, 1650-1720s." Journal of Sport History 13#3 (1986) p 219. online
- Struna, The Formalizing of Sport and the Formation of an Elite pp 212-16.
- Timothy H. Breen, "Horses and gentlemen: The cultural significance of gambling among the gentry of Virginia." William and Mary Quarterly (1977) 34#2 pp: 239-257. online
- Edmund Morgan, American Slavery, American Freedom: The Ordeal of Colonial Virginia (1975) p 386
- Heinemann, Old Dominion, New Commonwealth (2007) 83–90
- Gene Wilhelm, Jr., "Folk Culture History of the Blue Ridge Mountains" Appalachian Journal (1975) 2#3 in JSTOR
- Delma R. Carpenter, "The Route Followed by Governor Spotswood in 1716 across the Blue Ridge Mountains." Virginia Magazine of History and Biography (1965): 405-412. in JSTOR
- Rob Sherwood, "Germanna's Treasure Trove of History: A Journey of Discovery." Inquiry 13.1 (2008): 45-55. online
- "The Route of the Three Notch'd Road : A Preliminary Report" (PDF). Virginiadot.org. Retrieved 2015-04-16.
- "The Route of the Three Notch'd Road : A Preliminary Report" (PDF). 3chopt.com. Retrieved 2015-04-16.
- Encyclopedia Virginia article: "Backcountry Frontier of Colonial Virginia" online: http://www.encyclopediavirginia.org/Backcountry_Frontier_of_Colonial_Virginia#start_entry
- http://www.virginiaplaces.org/settleland/fairfaxgrant.html "Once colonial settlement moved upstream of the Fall Line into the Piedmont, the dispute over the inland edge of the Northern Neck grant became an issue. Settlers seeking clear title had to know whether to file paperwork and pay fees to the colonial government in Williamsburg or the land office of the Fairfax family. If the colony could extinguish the Northern Neck grant somehow, revenues would flow to Williamsburg rather than to Leeds Castle."
- http://www.historichampshire.org/research/searching1.htm "in mid-March, 1735, Lord Fairfax arrived in Virginia on board the Glasgow on his first inspection trip to America. The trip lasted over two years during which time Fairfax reasserted his claim to the Proprietary and made arrangements for the survey of the boundaries."
- http://www.mountvernon.org/digital-encyclopedia/article/lord-fairfax/ "in 1748 hired, among others, the sixteen-year old Washington to survey the Northern Neck."
- George Washington's elder half brother Lawrence Washington (1718-1752) was married to Anne (1728-1761) a daughter of Col. William Fairfax of Belvoir—a land agent and cousin of Lord Thomas Fairfax. Anne's brother, George William Fairfax, was married to Sally Fairfax (nee Cary).
- Historical Statement Relative to the Town of Winchester: the Virginia House of Burgesses granted the fourth city charter in Virginia to 'Winchester' as Frederick Town was renamed.
- MacCorkle, William Alexander. "The historical and other relations of Pittsburgh and the Virginias". Historic Pittsburgh General Text Collection. University of Pittsburgh. Retrieved 16 September 2013.
- Andrew Arnold Lambing; et al. "Allegheny County: its early history and subsequent development: from the earliest period till 1790". Historic Pittsburgh Text Collection. University of Pittsburgh. Retrieved 12 September 2013.
- "Addresses delivered at the celebration of the one hundred and fiftieth anniversary of the Battle of Bushy Run, August 5th and 6th, 1913". Historic Pittsburgh General Text Collection. University of Pittsburgh. Retrieved 16 September 2013.
- O'Meara, p. 48
- Anderson (2000), pp. 42–43
- "Royal Proclamation I". Archived from the original on October 20, 2013. Retrieved May 30, 2013.
- Gordon S. Wood, The American Revolution, A History. New York, Modern Library, 2002 ISBN 0-8129-7041-1, p.22
- Edward L. Bond and Joan R. Gundersen, The Episcopal Church in Virginia, 1607–2007 (2007)
- Rountree p. 161–162, 168–170, 175
- Edward L. Bond, "Anglican theology and devotion in James Blair's Virginia, 1685–1743," Virginia Magazine of History and Biography, (1996) 104#3 pp 313–40
- Charles Woodmason, The Carolina Backcountry on the Eve of the Revolution: The Journal and Other Writings of Charles Woodmason, Anglican Itinerant ed. by Richard J. Hooker (1969)
- David Brion Davis (1986). Slavery in the Colonial Chesapeake. Colonial Williamsburg. p. 28.
- Cynthia Lynn Lyerly (1998). Methodism and the Southern Mind, 1770-1810. Oxford UP. p. 119ff.
- John A. Ragosta, "Fighting for Freedom: Virginia Dissenters' Struggle for Religious Liberty during the American Revolution," Virginia Magazine of History and Biography, (2008) 116#3 pp. 226–261
- Rhys Isaac, "Evangelical Revolt: The Nature of the Baptists' Challenge to the Traditional Order in Virginia, 1765 To 1775," William and Mary Quarterly (1974) 31#3 pp 345–368 in JSTOR
- Pauline Maier, Ratification: The People Debate the Constitution, 1787–1788 (2010) pp. 235–319
- Peter Kolchin, American Slavery: 1619–1877, New York: Hill and Wang, 1994, p. 73
- Kolchin, American Slavery, p. 81
- Andrew Levy, The First Emancipator: The Forgotten Story of Robert Carter, the Founding Father who freed his slaves, New York: Random House, 2005 ( ISBN 0-375-50865-1)
- Scott Nesbit, "Scales Intimate and Sprawling: Slavery, Emancipation, and the Geography of Marriage in Virginia", Southern Spaces, July 19, 2011. http://southernspaces.org/2011/scales-intimate-and-sprawling-slavery-emancipation-and-geography-marriage-virginia.
- Albert J. Raboteau, Slave Religion: The 'Invisible Institution' in the Antebellum South, New York: Oxford University Press, 2004, p. 137, accessed December 27, 2008
- "Soil exhaustion in the Tidewater became chronic, and the Piedmont was "worn out, washed and gullied." Conditions were better in the Valley of Virginia, where wheat rather than tobacco was dominant, but even there people saw a brighter future outside Virginia." http://www.vahistorical.org/what-you-can-see/story-virginia/explore-story-virginia/1776-1860/becoming-southerners
- "In all, perhaps one million Virginians left the commonwealth between the Revolution and the Civil War." http://www.vahistorical.org/what-you-can-see/story-virginia/explore-story-virginia/1776-1860/becoming-southerners
- "Virginia fell from first to seventh place in population, and its number of congressmen dropped from twenty-three to eleven." http://www.vahistorical.org/what-you-can-see/story-virginia/explore-story-virginia/1776-1860/becoming-southerners
- "Although this mass exodus of Virginians caused the state to slip into a secondary role both politically and economically, these westward-bound settlers spread their culture, laws, political ideas, and labor system across America." http://www.vahistorical.org/what-you-can-see/story-virginia/explore-story-virginia/1776-1860/becoming-southerners
- "Washington Iron Furnace National Register Nomination" (PDF). Virginia Department of Historic Resources. Archived from the original (PDF) on June 23, 2010. Retrieved March 23, 2011.
- S. Sydney Bradford, "The Negro Ironworker in Ante Bellum Virginia," Journal of Southern History, May 1959, Vol. 25 Issue 2, pp. 194–206; Ronald L. Lewis, "The Use and Extent of Slave Labor in the Virginia Iron Industry: The Antebellum Era," West Virginia History, Jan 1977, Vol. 38 Issue 2, pp. 141–156
- For a comparison of Virginia and New Jersey see John Bezis-Selfa, "A Tale of Two Ironworks: Slavery, Free Labor, Work, and Resistance in the Early Republic," William & Mary Quarterly, Oct 1999, Vol. 56 Issue 4, pp. 677–700
- "Archived copy". Archived from the original on February 3, 2008. Retrieved December 4, 2007.
- see "Libby Prison", Encyclopedia Virginia, accessed 21 April 2012
- Aaron Sheehan-Dean, "Everyman's War: Confederate Enlistment in Civil War Virginia," Civil War History, March 2004, Vol. 50 Issue 1, pp. 5–26
- The U.S. Constitution requires permission of the old state for a new state to form. David R. Zimring, "'Secession in Favor of the Constitution': How West Virginia Justified Separate Statehood during the Civil War," West Virginia History, (2009) 3#2 pp. 23–51
- Richard O. Curry, A House Divided, Statehood Politics & the Copperhead Movement in West Virginia, (1964), pp. 141–147.
- Curry, A House Divided, pg. 73.
- Curry, A House Divided, pgs. 141–152.
- Charles H. Ambler and Festus P. Summers, West Virginia: The Mountain State ch 15–20
- Otis K. Rice, West Virginia: A History (1985) ch 12–14
- Kenneth C. Martis, The Historical Atlas of the Congresses of the Confederate States of America 1861-1865 (1994) p. 43-53.
- The main scholarly histories are Hamilton James Eckenrode, The Political History of Virginia during the Reconstruction (1904); Richard Lowe, Republicans and Reconstruction in Virginia, 1856–70 (1991); and Jack P. Maddex, Jr., The Virginia Conservatives, 1867–1879: A Study in Reconstruction Politics (1970). See also Heinemann et al., New Commonwealth (2007) ch. 11
- Mary Farmer-Kaiser, Freedwomen and the Freedmen's Bureau: Race, Gender, and Public Policy in the Age of Emancipation, (Fordham U.P., 2010), quotes pp. 51, 13
- Richard Lowe, "Another Look at Reconstruction in Virginia," Civil War History, March 1986, Vol. 32 Issue 1, pp. 56–76
- James L. McDonough, "John Schofield as Military Director of Reconstruction in Virginia.," Civil War History, Sept 1969, Vol. 15#3, pp. 237–256
- Heinemann, et al. Old Dominion, New Commonwealth: A History of Virginia, 1607–2007 (2007) p 248.
- Eric Foner, Politics and Ideology in the Age of the Civil War (1980) p 146
- James E. Bond, No Easy Walk to Freedom: Reconstruction and the Ratification of the Fourteenth Amendment (Praeger, 1997) p. 156.
- Eckenrode, The Political History of Virginia during the Reconstruction, ch 5
- The Carpetbaggers were Northern whites who had moved to Virginia after the war. Heinemann et al., New Commonwealth (2007) p. 248
- Note: In order to gain public education, black delegates had to accept segregation in the schools.
- Eckenrode, The Political History of Virginia during the Reconstruction, ch 6
- Eckenrode, The Political History of Virginia during the Reconstruction, ch 7
- Walker had 119,535 votes and Wells 101,204. The new Underwood Constitution was approved overwhelmingly, but the disfranchisement clauses were rejected by 3:2 ratios. The new legislature was controlled by the Conservative Party, which soon absorbed the "True Republicans". Eckenrode, The Political History of Virginia during the Reconstruction, p. 411
- Ku Klux Klan chapters were formed in Virginia in the early years after the war, but they played a negligible role in state politics and soon vanished. Heinemann et al., New Commonwealth (2007) p. 249
- Nelson M. Blake, William Mahone of Virginia: Soldier and Political Insurgent (1935)
- Richard Lowe, Republicans and Reconstruction in Virginia, 1856-70 (1991) p 119
- Henry C. Ferrell, Claude A. Swanson of Virginia: a political biography (1985)
- George Harrison Gilliam, "Making Virginia Progressive," Virginia Magazine of History and Biography, 1999, Vol. 107 Issue 2, pp. 189–222
- Lex Renda, "The Advent of Agricultural Progressivism in Virginia," Virginia Magazine of History and Biography, 1988, Vol. 96 Issue 1, pp. 55–82
- Lloyd C. Taylor, Jr. "Lila Meade Valentine: The FFV as Reformer," Virginia Magazine of History and Biography, 1962, Vol. 70 Issue 4, pp. 471–487
- Sara Hunter Graham, "Woman Suffrage In Virginia: The Equal Suffrage League and Pressure-Group Politics, 1909–1920," Virginia Magazine of History and Biography, 1993, Vol. 101 Issue 2, pp. 227–250
- Michael Dennis, "Reforming the 'academical village,'" Virginia Magazine of History and Biography, 1997, Vol. 105 Issue 1, pp. 53–86
- James M. Lindgren, "'Virginia Needs Living Heroes': Historic Preservation in the Progressive Era," Public Historian, Jan 1991, Vol. 13 Issue 1, pp. 9–24
- "U-Boat Sinks Schooner Without Any Warning" (PDF). New York Times. August 17, 1918. Retrieved July 28, 2011.
- "RAIDING U-BOAT SINKS 2 NEUTRALS OFF VIRGINIA COAST". New York Times. June 17, 1918. Retrieved July 28, 2011.
- Arlington Connection, Michael Lee Pope, October 14–20, 2009, Alcohol as Budget Savior, page 3
- Morgan Kousser, The Shaping of Southern Politics (1974) p 181; Wallenstein, Cradle of America (2007) p 283–4
- V.O. Key, Jr., Southern Politics (1949) p 32
- Joe Freitus, Virginia in the War Years, 1938-1945: Military Bases, the U-Boat War and Daily Life (McFarland, 2014)
- Charles Johnson, "V for Virginia: The Commonwealth Goes to War," Virginia Magazine of History and Biography 100 (1992): 365–398 in JSTOR
- "A Brief History of U.S. Fleet Forces Command". U.S. Fleet Forces Command, USN. Retrieved March 17, 2011.
- "Langley's Role in Project Mercury". NASA Langley Research Center. Retrieved March 20, 2011.
- "Giant Leaps Began With "Little Joe"". NASA Langley Research Center. Retrieved March 20, 2011.
- "Viking: Trialblazer For All Mars Research". NASA Langley Research Center. Retrieved March 20, 2011.
- Benjamin Muse, Virginia's Massive Resistance (1961)
- Wallenstein, Peter (Fall 1997). "Not Fast, But First: The Desegregation of Virginia Tech". VT Magazine. Virginia Tech. Archived from the original on April 13, 2008. Retrieved April 12, 2008.
- Donnelly, Sally B. "D.C. Dotcom." Time August 8, 2000. http://www.time.com/time/magazine/article/0,9171,52073-2,00.html
- Freed, Benjamin (14 September 2016). "70 Percent of the World's Web Traffic Flows Through Loudoun County". Washingtonian.
- LIFE: Mark Warner becomes first U.S. politician to campaign in a video game Archived September 30, 2011, at the Wayback Machine.
- Virginia leads the way
- Virginia First State to Require Internet Safety Lessons
- "Notable dates in Virginia history". Virginia Historical Society.
- Benjamin Vincent (1910), "Virginia", Haydn's Dictionary of Dates (25th ed.), London: Ward, Lock & Co. – via Hathi Trust
- Dabney, Virginius. Virginia: The New Dominion (1971)
- Heinemann, Ronald L., John G. Kolp, Anthony S. Parent Jr., and William G. Shade, Old Dominion, New Commonwealth: A History of Virginia, 1607–2007 (2007). ISBN 978-0-8139-2609-4.
- Kierner, Cynthia A., and Sandra Gioia Treadway. Virginia Women: Their Lives and Times, vol. 1. (University of Georgia Press, 2015) x, 378 pp
- Morse, J. (1797). "Virginia". The American Gazetteer. Boston, Massachusetts: At the presses of S. Hall, and Thomas & Andrews.
- Rubin, Louis D. Virginia: A Bicentennial History. States and the Nation Series. (1977), popular
- Salmon, Emily J., and Edward D.C. Campbell, Jr., eds. The Hornbook of Virginia history: A Ready-Reference Guide to the Old Dominion's People, Places, and Past 4th edition. (1994)
- Wallenstein, Peter. Cradle of America: Four Centuries of Virginia History (2007). ISBN 978-0-7006-1507-0.
- WPA. Virginia: A Guide to the Old Dominion (1940) famous guide to every locality; strong on society, economy and culture online edition
- Younger, Edward, and James Tice Moore, eds. The Governors of Virginia, 1860–1978 (1982)
- Tarter, Brent, "Making History in Virginia," Virginia Magazine of History and Biography Volume: 115. Issue: 1. 2007. pp. 3+. online edition
- Appelbaum, Robert, and John Wood Sweet, eds. Envisioning an English empire: Jamestown and the making of the North Atlantic world (U of Pennsylvania Press, 2011)
- Billings, Warren M., John E. Selby, and Thad W. Tate. Colonial Virginia: A History (1986)
- Bond, Edward L. Damned Souls in the Tobacco Colony: Religion in Seventeenth-Century Virginia (2000),
- Breen T. H. Puritans and Adventurers: Change and Persistence in Early America (1980). 4 chapters on colonial social history online edition
- Breen, T. H. Tobacco Culture: The Mentality of the Great Tidewater Planters on the Eve of Revolution (1985)
- Breen, T. H., and Stephen D. Innes. "Myne Owne Ground": Race and Freedom on Virginia's Eastern Shore, 1640–1676 (1980)
- Brown, Kathleen M. Good Wives, Nasty Wenches, and Anxious Patriarchs: Gender, Race, and Power in Colonial Virginia (1996) excerpt and text search
- Byrd, William. The Secret Diary of William Byrd of Westover, 1709–1712 (1941) ed by Louis B. Wright and Marion Tinling online edition; famous primary source; very candid about his private life
- Bruce, Philip Alexander. Institutional History of Virginia in the Seventeenth Century: An Inquiry into the Religious, Moral, Educational, Legal, Military, and Political Condition of the People, Based on Original and Contemporaneous Records (1910) online edition
- Coombs, John C., "The Phases of Conversion: A New Chronology for the Rise of Slavery in Early Virginia," William and Mary Quarterly, 68 (July 2011), 332–60.
- Davis, Richard Beale. Intellectual Life in the Colonial South, 1585–1763 (3 vol., 1978), detailed coverage of Virginia
- Freeman, Douglas Southall; George Washington: A Biography Volume: 1–7. (1948). Pulitzer Prize. vol 1 online
- Gleach; Frederic W. Powhatan's World and Colonial Virginia: A Conflict of Cultures (1997).
- Alexander B. Haskell, For God, King, and People: Forging Commonwealth Bonds in Renaissance Virginia. Chapel Hill, NC: University of North Carolina Press, 2017.
- Isaac, Rhys. Landon Carter's Uneasy Kingdom: Revolution and Rebellion on a Virginia Plantation (2004)
- Isaac, Rhys. The Transformation of Virginia, 1740–1790 (1982, 1999) Pulitzer Prize winner, dealing with religion and morality online review
- Kolp, John Gilman. Gentlemen and Freeholders: Electoral Politics in Colonial Virginia (Johns Hopkins U.P. 1998)
- Menard, Russell R. "The Tobacco Industry in the Chesapeake Colonies, 1617–1730: An Interpretation." Research in Economic History 1980 5: 109–177. ISSN 0363-3268; the standard scholarly study
- Mook, Maurice A. "The Aboriginal Population of Tidewater Virginia." American Anthropologist (1944) 46#2 pp: 193-208. online
- Morgan, Edmund S. Virginians at Home: Family Life in the Eighteenth Century (1952). online edition
- Morgan, Edmund S. "Slavery and Freedom: The American Paradox." Journal of American History 1972 59(1): 5–29
- Morgan, Edmund S. American Slavery, American Freedom: The Ordeal of Colonial Virginia (1975) online edition highly influential study
- Nelson, John. A Blessed Company: Parishes, Parsons, and Parishioners in Anglican Virginia, 1690–1776 (2001)
- Price, David A. Love and Hate in Jamestown: John Smith, Pocahontas, and the Start of a New Nation (2005)
- Rasmussen, William M.S. and Robert S. Tilton. Old Virginia: The Pursuit of a Pastoral Ideal (2003)
- Roeber, A. G. Faithful Magistrates and Republican Lawyers: Creators of Virginia Legal Culture, 1680–1810 (1981)
- Rountree, Helen C. Pocahontas, Powhatan, Opechancanough: Three Indian Lives Changed by Jamestown (University of Virginia press, 2005), early Virginia history from an Indian perspective by a scholar
- Rutman, Darrett B., and Anita H. Rutman. A Place in Time: Middlesex County, Virginia, 1650–1750 (1984), new social history
- Sheehan, Bernard. Savagism and civility: Indians and Englishmen in colonial Virginia (Cambridge UP, 1980.)
- Wertenbaker, Thomas J. The Shaping of Colonial Virginia, comprising Patrician and Plebeian in Virginia (1910) full text online; Virginia under the Stuarts (1914) full text online; and The Planters of Colonial Virginia (1922) full text online; well written but outdated
- Wright, Louis B. The First Gentlemen of Virginia: Intellectual Qualities of the Early Colonial Ruling Class (1964)
- Adams, Sean Patrick. Old Dominion, Industrial Commonwealth: Coal, Politics, and Economy in Antebellum America (2004)
- Ambler, Charles H. Sectionalism in Virginia from 1776 to 1861 (1910) full text online
- Beeman, Richard R. The Old Dominion and the New Nation, 1788–1801 (1972)
- Dill, Alonzo Thomas. "Sectional Conflict in Colonial Virginia," Virginia Magazine of History and Biography 87 (1979): 300–315.
- Lebsock, Suzanne D. A Share of Honor: Virginia Women, 1600–1945 (1984)
- Link, William A. Roots of Secession: Slavery and Politics in Antebellum Virginia (2007) excerpt and text search
- Majewski, John D. A House Dividing: Economic Development in Pennsylvania and Virginia Before the Civil War (2006) excerpt and text search
- Risjord, Norman K. Chesapeake Politics, 1781–1800 (1978). in-depth coverage of Virginia, Maryland and North Carolina online edition
- Selby, John E. The Revolution in Virginia, 1775–1783 (1988)
- Shade, William G. Democratizing the Old Dominion: Virginia and the Second Party System 1824–1861 (1996)
- Taylor, Alan. The Internal Enemy: Slavery and War in Virginia, 1772-1832 (2014). 624 pp online review
- Tillson, Albert H., Jr. Gentry and Common Folk: Political Culture on a Virginia Frontier, 1740–1789 (1991)
- Varon; Elizabeth R. We Mean to Be Counted: White Women and Politics in Antebellum Virginia (1998)
- Virginia State Dept. of Education. The Road to Independence: Virginia 1763–1783 online edition; 80pp; with student projects
- Blair, William. Virginia's Private War: Feeding Body and Soul in the Confederacy, 1861–1865 (1998) online edition
- Crofts, Daniel W. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989)
- Eckenrode, Hamilton James. The political history of Virginia during the Reconstruction, (1904) online edition
- Kerr-Ritchie, Jeffrey R. Freedpeople in the Tobacco South: Virginia, 1860–1900 (1999)
- Lankford, Nelson. Richmond Burning: The Last Days of the Confederate Capital (2002)
- Lebsock, Suzanne D. "A Share of Honor": Virginia Women, 1600–1945 (1984)
- Lowe, Richard. Republicans and Reconstruction in Virginia, 1856–70 (1991)
- Maddex, Jr., Jack P. The Virginia Conservatives, 1867–1879: A Study in Reconstruction Politics (1970).
- Majewski, John. A House Dividing: Economic Development in Pennsylvania and Virginia before the Civil War (2000)
- Noe, Kenneth W. Southwest Virginia's Railroad: Modernization and the Sectional Crisis (1994)
- Robertson, James I. Civil War Virginia: Battleground for a Nation (1993) 197 pages; excerpt and text search
- Shanks, Henry T. The Secession Movement in Virginia, 1847–1861 (1934) online edition
- Sheehan-Dean, Aaron Charles. Why Confederates fought: family and nation in Civil War Virginia (2007) 291 pages excerpt and text search
- Simpson, Craig M. A Good Southerner: The Life of Henry A. Wise of Virginia (1985), wide-ranging political history
- Wallenstein, Peter, and Bertram Wyatt-Brown, eds. Virginia's Civil War (2008) excerpt and text search
- Wills, Brian Steel. The war hits home: the Civil War in southeastern Virginia (2001) 345 pages; excerpt and text search
- Brundage, W. Fitzhugh. Lynching in the New South: Georgia and Virginia, 1880–1930 (1993)
- Buni, Andrew. The Negro in Virginia Politics, 1902–1965 (1967)
- Crofts, Daniel W. Reluctant Confederates: Upper South Unionists in the Secession Crisis (1989)
- Ferrell, Henry C., Jr. Claude A. Swanson of Virginia: A Political Biography (1985) early 20th century
- Freitus, Joe. Virginia in the War Years, 1938-1945: Military Bases, the U-Boat War and Daily Life (McFarland, 2014) online review
- Gilliam, George H. "Making Virginia Progressive: Courts and Parties, Railroads and Regulators, 1890–1910." Virginia Magazine of History and Biography 107 (Spring 1999): 189–222.
- Heinemann, Ronald L. Depression and the New Deal in Virginia: The Enduring Dominion (1983)
- Heinemann, Ronald L. Harry Byrd of Virginia (1996)
- Heinemann, Ronald L. "Virginia in the Twentieth Century: Recent Interpretations." Virginia Magazine of History and Biography 94 (April 1986): 131–60.
- Hunter, Robert F. "Virginia and the New Deal," in John Braeman et al. eds. The New Deal: Volume Two – the State and Local Levels (1975) pp. 103–36
- Johnson, Charles. "V for Virginia: The Commonwealth Goes to War," Virginia Magazine of History and Biography 100 (1992): 365–398 in JSTOR
- Kerr-Ritchie, Jeffrey R. Freedpeople in the Tobacco South: Virginia, 1860–1900 (1999)
- Key, V. O., Jr. Southern Politics in State and Nation (1949), important chapter on Virginia in the 1940s
- Lassiter, Matthew D., and Andrew B. Lewis, eds. The Moderates' Dilemma: Massive Resistance to School Desegregation in Virginia (1998)
- Lebsock, Suzanne D. "A Share of Honor": Virginia Women, 1600–1945 (1984)
- Link, William A. A Hard Country and a Lonely Place: Schooling, Society, and Reform in Rural Virginia, 1870–1920 (1986)
- Martin-Perdue, Nancy J., and Charles L. Perdue Jr., eds. Talk about Trouble: A New Deal Portrait of Virginians in the Great Depression (1996)
- Moger, Allen W. Virginia: Bourbonism to Byrd, 1870–1925 (1968)
- Muse, Benjamin. Virginia's Massive Resistance (1961)
- Pulley, Raymond H. Old Virginia Restored: An Interpretation of the Progressive Impulse, 1870–1930 (1968)
- Shiftlett, Crandall. Patronage and Poverty in the Tobacco South: Louisa County, Virginia, 1860–1900 (1982), new social history
- Smith, J. Douglas. Managing White Supremacy: Race, Politics, and Citizenship in Jim Crow Virginia (2002)
- Sweeney, James R. "Rum, Romanism, and Virginia Democrats: The Party Leaders and the Campaign of 1928" Virginia Magazine of History and Biography 90 (October 1982): 403–31.
- Wilkinson, J. Harvie, III. Harry Byrd and the Changing Face of Virginia Politics, 1945–1966 (1968)
- Wynes, Charles E. Race Relations in Virginia, 1870–1902 (1961)
- Adams, Stephen. The Best and Worst Country in the World: Perspectives on the Early Virginia Landscape (2002) excerpt and text search
- Gottmann, Jean. Virginia at mid-century (1955), by a leading geographer
- Gottmann, Jean. Virginia in Our Century (1969)
- Kirby, Jack Temple. "Virginia'S Environmental History: A Prospectus," Virginia Magazine of History and Biography, 1991, Vol. 99 Issue 4, pp. 449–488
- Parramore, Thomas C., with Peter C. Stewart and Tommy L. Bogger. Norfolk: The First Four Centuries (1994)
- Terwilliger, Karen. Virginia's Endangered Species (2001), esp. ch 1
- Sawyer, Roy T. America's Wetland: An Environmental and Cultural History of Tidewater Virginia and North Carolina (University of Virginia Press; 2010) 248 pages; traces the human impact on the ecosystem of the Tidewater region.
- Jefferson, Thomas. Notes on the State of Virginia
- Duke, Maurice, and Daniel P. Jordan, eds. A Richmond Reader, 1733–1983 (1983)
- Eisenberg, Ralph. Virginia Votes, 1924–1968 (1971), all statistics
- Encyclopedia Virginia
- Virginia Historical Society short history of state, with teacher guide
- Virginia Memory, digital collections and online classroom of the Library of Virginia
- How Counties Got Started in Virginia
- Union or Secession: Virginians Decide
- Virginia and the Civil War
- Civil War timeline
- Boston Public Library, Map Center. Maps of Virginia, various dates. |
linear algebra, mathematical discipline that deals with vectors and matrices and, more generally, with vector spaces and linear transformations. Unlike other parts of mathematics that are frequently invigorated by new ideas and unsolved problems, linear algebra is very well understood. Its value lies in its many applications, from mathematical physics to modern algebra and coding theory.
Vectors and vector spaces
Linear algebra usually starts with the study of vectors, which are understood as quantities having both magnitude and direction. Vectors lend themselves readily to physical applications. For example, consider a solid object that is free to move in any direction. When two forces act at the same time on this object, they produce a combined effect that is the same as a single force. To picture this, represent the two forces v and w as arrows; the direction of each arrow gives the direction of the force, and its length gives the magnitude of the force. The single force that results from combining v and w is called their sum, written v + w. In the parallelogram formed from adjacent sides represented by v and w, the sum v + w corresponds to the diagonal of the parallelogram.
Vectors are often expressed using coordinates. For example, in two dimensions a vector can be defined by a pair of coordinates (a1, a2) describing an arrow going from the origin (0, 0) to the point (a1, a2). If one vector is (a1, a2) and another is (b1, b2), then their sum is (a1 + b1, a2 + b2); this gives the same result as the parallelogram rule. In three dimensions a vector is expressed using three coordinates (a1, a2, a3), and this idea extends to any number of dimensions.
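To make the coordinate description concrete, here is a minimal sketch in plain Python (the function name is illustrative, not taken from any library) that adds two vectors component by component; the result agrees with the parallelogram rule.

def vector_add(a, b):
    """Add two vectors of the same dimension component by component."""
    if len(a) != len(b):
        raise ValueError("vectors must have the same dimension")
    return tuple(x + y for x, y in zip(a, b))

v = (2, 1)   # arrow from the origin to the point (2, 1)
w = (1, 3)   # arrow from the origin to the point (1, 3)
print(vector_add(v, w))                    # (3, 4): the diagonal of the parallelogram
print(vector_add((1, 0, 2), (4, -1, 5)))   # (5, -1, 7): the same idea in three dimensions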
Representing vectors as arrows in two or three dimensions is a starting point, but linear algebra has been applied in contexts where this is no longer appropriate. For example, in some types of differential equations the sum of two solutions gives a third solution, and any constant multiple of a solution is also a solution. In such cases the solutions can be treated as vectors, and the set of solutions is a vector space in the following sense. In a vector space any two vectors can be added together to give another vector, and vectors can be multiplied by numbers to give “shorter” or “longer” vectors. The numbers are called scalars because in early examples they were ordinary numbers that altered the scale, or length, of a vector. For example, if v is a vector and 2 is a scalar, then 2v is a vector in the same direction as v but twice as long. In many modern applications of linear algebra, scalars are no longer ordinary real numbers, but the important thing is that they can be combined among themselves by addition, subtraction, multiplication, and division. For example, the scalars may be complex numbers, or they may be elements of a finite field such as the field having only the two elements 0 and 1, where 1 + 1 = 0. The coordinates of a vector are scalars, and when these scalars are from the field of two elements, each coordinate is 0 or 1, so each vector can be viewed as a particular sequence of 0s and 1s. This is very useful in digital processing, where such sequences are used to encode and transmit data.
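The two vector-space operations can be sketched the same way. The following plain-Python example (illustrative names, not a library API) shows ordinary scalar multiplication and coordinate-wise addition over the two-element field, where 1 + 1 = 0, which is the arithmetic used when bit strings are treated as vectors.

def scalar_multiply(c, v):
    """Multiply every coordinate of the vector v by the scalar c."""
    return tuple(c * x for x in v)

def add_mod2(a, b):
    """Add two vectors coordinate-wise over the field with the two elements 0 and 1."""
    return tuple((x + y) % 2 for x, y in zip(a, b))

v = (3, -1)
print(scalar_multiply(2, v))            # (6, -2): same direction as v, twice as long

codeword = (1, 0, 1, 1, 0)              # a bit string viewed as a vector of 0s and 1s
error    = (0, 0, 1, 0, 0)
print(add_mod2(codeword, error))        # (1, 0, 0, 1, 0), since 1 + 1 = 0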
Linear transformations and matrices
Vector spaces are one of the two main ingredients of linear algebra, the other being linear transformations (or “operators” in the parlance of physicists). Linear transformations are functions that send, or “map,” one vector to another vector. The simplest example of a linear transformation sends each vector to c times itself, where c is some constant. Thus, every vector remains in the same direction, but all lengths are multiplied by c. Another example is a rotation, which leaves all lengths the same but alters the directions of the vectors. Linear refers to the fact that the transformation preserves vector addition and scalar multiplication. This means that if T is a linear transformation sending a vector v to T(v), then for any vectors v and w, and any scalar c, the transformation must satisfy the properties T(v + w) = T(v) + T(w) and T(cv) = cT(v).
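The two defining properties can be verified numerically. The sketch below, again in plain Python with illustrative names (the rotation uses the standard two-dimensional rotation formula), checks T(v + w) = T(v) + T(w) and T(cv) = cT(v) for a scaling and for a rotation, up to floating-point rounding.

import math

def scale_by(c):
    """Return the transformation that multiplies every vector by the scalar c."""
    return lambda v: tuple(c * x for x in v)

def rotate_by(theta):
    """Return the transformation that rotates 2-D vectors by the angle theta (in radians)."""
    cos_t, sin_t = math.cos(theta), math.sin(theta)
    return lambda v: (cos_t * v[0] - sin_t * v[1], sin_t * v[0] + cos_t * v[1])

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def respects_linearity(T, v, w, c, tol=1e-9):
    """Check T(v + w) == T(v) + T(w) and T(c*v) == c*T(v) for one choice of v, w, c."""
    additive = all(abs(p - q) < tol for p, q in zip(T(add(v, w)), add(T(v), T(w))))
    cv = tuple(c * x for x in v)
    homogeneous = all(abs(p - q) < tol for p, q in zip(T(cv), tuple(c * x for x in T(v))))
    return additive and homogeneous

v, w = (1.0, 2.0), (-3.0, 0.5)
print(respects_linearity(scale_by(2.5), v, w, c=4.0))            # True
print(respects_linearity(rotate_by(math.pi / 6), v, w, c=-2.0))  # True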
When doing computations, linear transformations are treated as matrices. A matrix is a rectangular arrangement of scalars; two matrices of the same size can be added entry by entry, and matrices of compatible sizes can be multiplied. The product of two matrices shows the result of doing one transformation followed by another (from right to left), and if the transformations are done in reverse order the result is usually different. Thus, the product of two matrices depends on the order of multiplication; if S and T are square matrices (matrices with the same number of rows as columns) of the same size, then ST and TS are rarely equal. The matrix for a given transformation is found using coordinates. For example, in two dimensions a linear transformation T can be completely determined simply by knowing its effect on any two vectors v and w that have different directions. Their transformations T(v) and T(w) are given by two coordinates; therefore, only four coordinates, two for T(v) and two for T(w), are needed to specify T. These four coordinates are arranged in a 2-by-2 matrix. In three dimensions three vectors u, v, and w are needed, and to specify T(u), T(v), and T(w) one needs three coordinates for each. This results in a 3-by-3 matrix.
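A short plain-Python sketch (illustrative names, not tied to any library) illustrates the order dependence: it multiplies two 2-by-2 matrices, with each matrix's columns taken as the images of the standard basis vectors, and shows that ST and TS differ.

def matrix_multiply(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# S doubles the x-coordinate; T swaps the two coordinates.
# In each matrix, the columns are the images of the standard basis vectors.
S = [[2, 0],
     [0, 1]]
T = [[0, 1],
     [1, 0]]

print(matrix_multiply(S, T))   # [[0, 2], [1, 0]]: swap first, then double x
print(matrix_multiply(T, S))   # [[0, 1], [2, 0]]: double x first, then swap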
Personality disorders (PD) are a class of mental disorders characterized by enduring maladaptive patterns of behavior, cognition, and inner experience, exhibited across many contexts and deviating from those accepted by the individual's culture. These patterns develop early, are inflexible, and are associated with significant distress or disability. The definitions may vary somewhat, according to source. Official criteria for diagnosing personality disorders are listed in the Diagnostic and Statistical Manual of Mental Disorders (DSM) and the fifth chapter of the International Classification of Diseases (ICD), which are the world's two most dominant diagnostic systems.
|Specialty||Psychiatry; clinical psychology|
Personality, defined psychologically, is the set of enduring behavioral and mental traits that distinguish between individual humans. Hence, personality disorders are defined by experiences and behaviors that differ from social norms and expectations. Those diagnosed with a personality disorder may experience difficulties in cognition, emotiveness, interpersonal functioning, or impulse control. In general, personality disorders are diagnosed in 40–60% of psychiatric patients, making them the most frequent of psychiatric diagnoses.
Personality disorders are characterized by an enduring collection of behavioral patterns often associated with considerable personal, social, and occupational disruption. Personality disorders are also inflexible and pervasive across many situations, largely due to the fact that such behavior may be ego-syntonic (i.e. the patterns are consistent with the ego integrity of the individual) and are therefore perceived to be appropriate by that individual. This behavior can result in maladaptive coping skills and may lead to personal problems that induce extreme anxiety, distress, or depression. These behaviour patterns are typically recognized in adolescence, the beginning of adulthood or sometimes even childhood and often have a pervasive negative impact on the quality of life.
Many issues occur with classifying a personality disorder. Because the theory and diagnosis of personality disorders occur within prevailing cultural expectations, their validity is contested by some experts on the basis of inevitable subjectivity. They argue that the theory and diagnosis of personality disorders are based strictly on social, or even sociopolitical and economic considerations.
The two relevant major systems of classification are
- the International Classification of Diseases (10th revision, ICD-10) published by the World Health Organization
- the Diagnostic and Statistical Manual of Mental Disorders (Fifth Edition, DSM-5) by the American Psychiatric Association.
Both have deliberately merged their diagnoses to some extent, but some differences remain. For example, ICD-10 does not include narcissistic personality disorder as a distinct category, while DSM-5 does not include enduring personality change after catastrophic experience or after psychiatric illness. ICD-10 classifies the DSM-5 schizotypal personality disorder as a form of schizophrenia rather than as a personality disorder. There are accepted diagnostic issues and controversies with regard to distinguishing particular personality disorder categories from each other.
Both diagnostic systems provide a definition and six criteria for a general personality disorder. These criteria should be met by all personality disorder cases before a more specific diagnosis can be made.
- Markedly disharmonious attitudes and behavior, generally involving several areas of functioning; e.g. affectivity, arousal, impulse control, ways of perceiving and thinking, and style of relating to others;
- The abnormal behavior pattern is enduring, of long standing, and not limited to episodes of mental illness;
- The abnormal behavior pattern is pervasive and clearly maladaptive to a broad range of personal and social situations;
- The above manifestations always appear during childhood or adolescence and continue into adulthood;
- The disorder leads to considerable personal distress but this may only become apparent late in its course;
- The disorder is usually, but not invariably, associated with significant problems in occupational and social performance.
The ICD adds: "For different cultures it may be necessary to develop specific sets of criteria with regard to social norms, rules and obligations."
- An enduring pattern of inner experience and behavior that deviates markedly from the expectations of the individual’s culture. This pattern is manifested in two (or more) of the following areas:
- Cognition (i.e., ways of perceiving and interpreting self, other people, and events).
- Affectivity (i.e., the range, intensity, lability, and appropriateness of emotional response).
- Interpersonal functioning.
- Impulse control.
- The enduring pattern is inflexible and pervasive across a broad range of personal and social situations.
- The enduring pattern leads to clinically significant distress or impairment in social, occupational, or other important areas of functioning.
- The pattern is stable and of long duration, and its onset can be traced back at least to adolescence or early adulthood.
- The enduring pattern is not better explained as a manifestation or consequence of another mental disorder.
- The enduring pattern is not attributable to the physiological effects of a substance (e.g., a drug of abuse, a medication) or another medical condition (e.g., head trauma).
Chapter V in the ICD-10 contains the mental and behavioral disorders and includes categories of personality disorder and enduring personality changes. They are defined as ingrained patterns indicated by inflexible and disabling responses that significantly differ from how the average person in the culture perceives, thinks, and feels, particularly in relating to others.
Besides the ten specific PD, there are the following categories:
- Other specific personality disorders (involves PD characterized as eccentric, haltlose, immature, narcissistic, passive–aggressive, or psychoneurotic.)
- Personality disorder, unspecified (includes "character neurosis" and "pathological personality").
- Mixed and other personality disorders (defined as conditions that are often troublesome but do not demonstrate the specific pattern of symptoms in the named disorders).
- Enduring personality changes, not attributable to brain damage and disease (this is for conditions that seem to arise in adults without a diagnosis of personality disorder, following catastrophic or prolonged stress or other psychiatric illness).
In the proposed revision of ICD-11, all discrete personality disorder diagnoses will be removed and replaced by the single diagnosis "personality disorder". Instead, there will be specifiers called "prominent personality traits" and the possibility of classifying degrees of severity as "mild", "moderate", or "severe", based on the dysfunction in the patient's interpersonal relationships and everyday life.
The most recent fifth edition of the Diagnostic and Statistical Manual of Mental Disorders stresses a personality disorder is an enduring and inflexible pattern of long duration leading to significant distress or impairment and is not due to use of substances or another medical condition. The DSM-5 lists personality disorders in the same way as other mental disorders, rather than on a separate 'axis', as previously.
The DSM-5 also contains three diagnoses for personality patterns that do not match these ten disorders but nevertheless exhibit the characteristics of a personality disorder:
- Personality change due to another medical condition – personality disturbance due to the direct effects of a medical condition.
- Other specified personality disorder – general criteria for a personality disorder are met but fails to meet the criteria for a specific disorder, with the reason given.
- Unspecified personality disorder – general criteria for a personality disorder are met but the personality disorder is not included in the DSM-5 classification.
The specific personality disorders are grouped into the following three clusters based on descriptive similarities:
Cluster A (odd or eccentric disorders)Edit
Cluster A personality disorders are often associated with schizophrenia: in particular, schizotypal personality disorder shares some of its hallmark symptoms, e.g., acute discomfort in close relationships, cognitive or perceptual distortions, and eccentricities of behavior, with schizophrenia. However, people diagnosed with odd-eccentric personality disorders tend to have a greater grasp on reality than do those diagnosed with schizophrenia. Patients suffering from these disorders can be paranoid and have difficulty being understood by others, as they often have odd or eccentric modes of speaking and an unwillingness or inability to form and maintain close relationships. Though their perceptions may be unusual, these anomalies are distinguished from delusions or hallucinations as people suffering from these would be diagnosed with other conditions. Significant evidence suggests a small proportion of people with Cluster A personality disorders, especially schizotypal personality disorder, have the potential to develop schizophrenia and other psychotic disorders. These disorders also have a higher probability of occurring among individuals whose first-degree relatives have either schizophrenia or a Cluster A personality disorder.
- Paranoid personality disorder: characterized by a pattern of irrational suspicion and mistrust of others, interpreting motivations as malevolent.
- Schizoid personality disorder: lack of interest and detachment from social relationships, apathy, and restricted emotional expression.
- Schizotypal personality disorder: pattern of extreme discomfort interacting socially, and distorted cognitions and perceptions.
Cluster B (dramatic, emotional or erratic disorders)Edit
- Antisocial personality disorder: pervasive pattern of disregard for and violation of the rights of others, lack of empathy, bloated self-image, manipulative and impulsive behavior.
- Borderline personality disorder: pervasive pattern of abrupt mood swings, instability in relationships, self-image, identity, behavior and affect, often leading to self-harm and impulsivity.
- Histrionic personality disorder: pervasive pattern of attention-seeking behavior and excessive emotions.
- Narcissistic personality disorder: pervasive pattern of grandiosity, need for admiration, and a perceived or real lack of empathy.
Cluster C (anxious or fearful disorders)Edit
- Avoidant personality disorder: pervasive feelings of social inhibition and inadequacy, extreme sensitivity to negative evaluation.
- Dependent personality disorder: pervasive psychological need to be cared for by other people.
- Obsessive-compulsive personality disorder: characterized by rigid conformity to rules, perfectionism, and control to the point of satisfaction and exclusion of leisurely activities and friendships (distinct from obsessive-compulsive disorder).
Other personality typesEdit
Some types of personality disorder were in previous versions of the diagnostic manuals but have been deleted. Examples include sadistic personality disorder (pervasive pattern of cruel, demeaning, and aggressive behavior) and self-defeating personality disorder or masochistic personality disorder (characterised by behaviour consequently undermining the person's pleasure and goals). They were listed in the DSM-III-R appendix as "Proposed diagnostic categories needing further study" without specific criteria. The psychologist Theodore Millon and others consider some relegated diagnoses to be equally valid disorders, and may also propose other personality disorders or subtypes, including mixtures of aspects of different categories of the officially accepted diagnoses.
Psychologist Theodore Millon, who has written numerous popular works on personality, proposed the following description of personality disorders:
|Type of personality disorder||Description|
|Paranoid||Guarded, defensive, distrustful and suspicious. Hypervigilant to the motives of others to undermine or do harm. Always seeking confirmatory evidence of hidden schemes. Feel righteous, but persecuted. People with paranoid personality disorder experience a pattern of pervasive distrust and suspicion of others that lasts a long time. They are generally difficult to work with and are very hard to form relationships with. They are also known to be somewhat short-tempered.|
|Schizoid||Apathetic, indifferent, remote, solitary, distant, humorless. Neither desire nor need human attachments. Withdrawn from relationships and prefer to be alone. Little interest in others, often seen as a loner. Minimal awareness of the feelings of themselves or others. Few drives or ambitions, if any. Is an uncommon condition in which people avoid social activities and consistently shy away from interaction with others. It affects more males than females. To others, they may appear somewhat dull or humorless. Because they don't tend to show emotion, they may appear as though they don't care about what's going on around them.|
|Schizotypal||Eccentric, self-estranged, bizarre, absent. Exhibit peculiar mannerisms and behaviors. Think they can read thoughts of others. Preoccupied with odd daydreams and beliefs. Blur line between reality and fantasy. Magical thinking and strange beliefs. People with schizotypal personality disorder are often described as odd or eccentric and usually have few, if any, close relationships. They generally don't understand how relationships form or the impact of their behavior on others.|
|Antisocial||Impulsive, irresponsible, deviant, unruly. Act without due consideration. Meet social obligations only when self-serving. Disrespect societal customs, rules, and standards. See themselves as free and independent. People with antisocial personality disorder depict a long pattern of disregard for other people's rights. They often cross the line and violate these rights.|
|Borderline||Unpredictable, manipulative, unstable. Frantically fears abandonment and isolation. Experience rapidly fluctuating moods. Shift rapidly between loving and hating. See themselves and others alternatively as all-good and all-bad. Unstable and frequently changing moods. People with borderline personality disorder have a pervasive pattern of instability in interpersonal relationships.|
|Histrionic||Dramatic, seductive, shallow, stimulus-seeking, vain. Overreact to minor events. Exhibitionistic as a means of securing attention and favors. See themselves as attractive and charming. Constantly seeking others' attention. Disorder is characterized by constant attention-seeking, emotional overreaction, and suggestibility. Their tendency to over-dramatize may impair relationships and lead to depression, but they are often high-functioning.|
|Narcissistic||Egotistical, arrogant, grandiose, insouciant. Preoccupied with fantasies of success, beauty, or achievement. See themselves as admirable and superior, and therefore entitled to special treatment. Is a mental disorder in which people have an inflated sense of their own importance and a deep need for admiration. Those with narcissistic personality disorder believe that they're superior to others and have little regard for other people's feelings.|
|Avoidant||Hesitant, self-conscious, embarrassed, anxious. Tense in social situations due to fear of rejection. Plagued by constant performance anxiety. See themselves as inept, inferior, or unappealing. They experience long-standing feelings of inadequacy and are very sensitive to what others think about them.|
|Dependent||Helpless, incompetent, submissive, immature. Withdrawn from adult responsibilities. See themselves as weak or fragile. Seek constant reassurance from stronger figures. They have the need to be taken care of by a person. They fear being abandoned or separated from important people in their life.|
|Obsessive–compulsive||Restrained, conscientious, respectful, rigid. Maintain a rule-bound lifestyle. Adhere closely to social conventions. See the world in terms of regulations and hierarchies. See themselves as devoted, reliable, efficient, and productive.|
|Depressive||Somber, discouraged, pessimistic, brooding, fatalistic. Present themselves as vulnerable and abandoned. Feel valueless, guilty, and impotent. Judge themselves as worthy only of criticism and contempt. Hopeless, suicidal, and restless. This disorder can lead to aggressive acts and hallucinations.|
|Passive–aggressive (Negativistic)||Resentful, contrary, skeptical, discontented. Resist fulfilling others' expectations. Deliberately inefficient. Vent anger indirectly by undermining others' goals. Alternately moody and irritable, then sullen and withdrawn. Withhold emotions. Will not communicate when there is something problematic to discuss.|
|Sadistic||Explosively hostile, abrasive, cruel, dogmatic. Liable to sudden outbursts of rage. Gain satisfaction through dominating, intimidating and humiliating others. They are opinionated and close-minded. Enjoy performing brutal acts on others. Find pleasure in abusing others. Would likely engage in a sadomasochistic relationship, but will not play the role of a masochist.|
|Self-defeating (Masochistic)||Deferential, pleasure-phobic, servile, blameful, self-effacing. Encourage others to take advantage of them. Deliberately defeat own achievements. Seek condemning or mistreating partners. They are suspicious of people who treat them well. Would likely engage in a sadomasochistic relationship.|
Classifying personality disorder by severity involves both the notion of personality difficulty as a measure of subthreshold scores for personality disorder using standard interviews, and the evidence that those with the most severe personality disorders demonstrate a "ripple effect" of personality disturbance across the whole range of mental disorders. In addition to subthreshold (personality difficulty) and single-cluster (simple personality disorder) categories, this approach also derives complex or diffuse personality disorder (two or more clusters of personality disorder present) and severe personality disorder for those at greatest risk.
|Level of Severity||Description||Definition by Categorical System|
|0||No Personality Disorder||Does not meet actual or subthreshold criteria for any personality disorder|
|1||Personality Difficulty||Meets sub-threshold criteria for one or several personality disorders|
|2||Simple Personality Disorder||Meets actual criteria for one or more personality disorders within the same cluster|
|3||Complex (Diffuse) Personality Disorder||Meets actual criteria for one or more personality disorders within more than one cluster|
|4||Severe Personality Disorder||Meets criteria for creation of severe disruption to both individual and to many in society|
There are several advantages to classifying personality disorder by severity:
- It not only allows for but also takes advantage of the tendency for personality disorders to be comorbid with each other.
- It represents the influence of personality disorder on clinical outcome more satisfactorily than the simple dichotomous system of no personality disorder versus personality disorder.
- This system accommodates the new diagnosis of severe personality disorder, particularly "dangerous and severe personality disorder" (DSPD).
Social function is affected by many other aspects of mental functioning apart from that of personality. However, whenever there is persistently impaired social functioning in conditions in which it would normally not be expected, the evidence suggests that this is more likely to be created by personality abnormality than by other clinical variables. The Personality Assessment Schedule gives social function priority in creating a hierarchy in which the personality disorder creating the greater social dysfunction is given primacy over others in a subsequent description of personality disorder.
Many who have a personality disorder do not recognize any abnormality and defend valiantly their continued occupancy of their personality role. This group have been termed the Type R, or treatment-resisting personality disorders, as opposed to the Type S or treatment-seeking ones, who are keen on altering their personality disorders and sometimes clamor for treatment. The classification of 68 personality disordered patients on the caseload of an assertive community team using a simple scale showed a 3 to 1 ratio between Type R and Type S personality disorders with Cluster C personality disorders being significantly more likely to be Type S, and paranoid and schizoid (Cluster A) personality disorders significantly more likely to be Type R than others.
It is generally assumed that all personality disorders are linked to impaired functioning and a reduced quality of life (QoL) because that is a basic diagnostic requirement. But research shows that this may be true only for some types of personality disorder.
In several studies, higher disability and lower QoL were predicted by avoidant, dependent, schizoid, paranoid, schizotypal and antisocial personality disorder. This link is particularly strong for avoidant, schizotypal and borderline PD. However, obsessive-compulsive PD was not related to a compromised QoL or dysfunction. A prospective study reported that all PD were associated with significant impairment 15 years later, except for obsessive-compulsive and narcissistic personality disorder.
One study investigated some aspects of "life success" (status, wealth and successful intimate relationships). It showed somewhat poor functioning for schizotypal, antisocial, borderline and dependent PD; schizoid PD had the lowest scores regarding these variables. Paranoid, histrionic and avoidant PD were average. Narcissistic and obsessive-compulsive PD, however, had high functioning and appeared to contribute rather positively to these aspects of life success.
There is also a direct relationship between the number of diagnostic criteria and quality of life: for each additional personality disorder criterion that a person meets, there is a corresponding reduction in quality of life.
In the workplaceEdit
Depending on the diagnosis, severity and individual, and the job itself, personality disorders can be associated with difficulty coping with work or the workplace—potentially leading to problems with others by interfering with interpersonal relationships. Indirect effects also play a role; for example, impaired educational progress or complications outside of work, such as substance abuse and co-morbid mental disorders, can plague sufferers. However, personality disorders can also bring about above-average work abilities by increasing competitive drive or causing the sufferer to exploit his or her co-workers.
In 2005 and again in 2009, psychologists Belinda Board and Katarina Fritzon at the University of Surrey, UK, interviewed and gave personality tests to high-level British executives and compared their profiles with those of criminal psychiatric patients at Broadmoor Hospital in the UK. They found that three out of eleven personality disorders were actually more common in executives than in the disturbed criminals:
- Histrionic personality disorder: including superficial charm, insincerity, egocentricity and manipulation
- Narcissistic personality disorder: including grandiosity, self-focused lack of empathy for others, exploitativeness and independence.
- Obsessive-compulsive personality disorder: including perfectionism, excessive devotion to work, rigidity, stubbornness and dictatorial tendencies.
Early stages and preliminary forms of personality disorders need a multi-dimensional and early treatment approach. Personality development disorder is considered to be a childhood risk factor or early stage of a later personality disorder in adulthood. In addition, Robert F. Krueger's review of this research indicates that some children and adolescents do suffer from clinically significant syndromes that resemble adult personality disorders, and that these syndromes have meaningful correlates and are consequential. Much of this research has been framed by the adult personality disorder constructs from Axis II of the Diagnostic and Statistical Manual. Hence, they are less likely to encounter the first risk they described at the outset of their review: clinicians and researchers are not simply avoiding use of the PD construct in youth. However, they may encounter the second risk they described: under-appreciation of the developmental context in which these syndromes occur. That is, although PD constructs show continuity over time, they are probabilistic predictors; not all youths who exhibit PD symptomatology become adult PD cases.
Versus mental disordersEdit
The disorders in each of the three clusters may share with each other underlying common vulnerability factors involving cognition, affect and impulse control, and behavioral maintenance or inhibition, respectively. But they may also have a spectrum relationship to certain syndromal mental disorders:
- Paranoid, schizoid or schizotypal personality disorders may be observed to be premorbid antecedents of delusional disorders or schizophrenia.
- Borderline personality disorder is seen in association with mood and anxiety disorders, with impulse control disorders, eating disorders, ADHD, or a substance use disorder. It is sometimes seen as a mild form of bipolar disorder.
- Avoidant personality disorder is seen with social anxiety disorder.
Versus normal personalityEdit
The relationship between normal personality and personality disorders is one of the important issues in personality and clinical psychology. The personality disorders classification (DSM-5 and ICD-10) follows a categorical approach that views personality disorders as discrete entities that are distinct from each other and from normal personality. In contrast, the dimensional approach holds that personality disorders represent maladaptive extensions of the same traits that describe normal personality.
Thomas Widiger and his collaborators have contributed to this debate significantly. He discussed the constraints of the categorical approach and argued for the dimensional approach to the personality disorders. Specifically, he proposed the Five Factor Model of personality as an alternative to the classification of personality disorders. For example, this view specifies that Borderline Personality Disorder can be understood as a combination of emotional lability (i.e., high neuroticism), impulsivity (i.e., low conscientiousness), and hostility (i.e., low agreeableness). Many studies across cultures have explored the relationship between personality disorders and the Five Factor Model. This research has demonstrated that personality disorders largely correlate in expected ways with measures of the Five Factor Model and has set the stage for including the Five Factor Model within DSM-5.
In clinical practice, individuals are generally diagnosed by an interview with a psychiatrist based on a mental status examination, which may take into account observations by relatives and others. One tool for diagnosing personality disorders is a structured interview with a scoring system: the patient is asked a series of questions, and the trained interviewer codes the responses. This process is fairly time-consuming.
|Facet||PPD||SzPD||StPD||ASPD||BPD||HPD||NPD||AvPD||DPD||OCPD||PAPD||DpPD||SDPD||SaPD|
|Neuroticism (vs. emotional stability)|
|Anxiousness (vs. unconcerned)||N/A||N/A||High||Low||High||N/A||N/A||High||High||High||N/A||N/A||N/A||N/A|
|Angry hostility (vs. dispassionate)||High||N/A||N/A||High||High||N/A||High||N/A||N/A||N/A||High||N/A||N/A||N/A|
|Depressiveness (vs. optimistic)||N/A||N/A||N/A||N/A||High||N/A||N/A||N/A||N/A||N/A||N/A||High||N/A||N/A|
|Self-consciousness (vs. shameless)||N/A||N/A||High||Low||N/A||Low||Low||High||High||N/A||N/A||High||N/A||N/A|
|Impulsivity (vs. restrained)||N/A||N/A||N/A||High||High||High||N/A||Low||N/A||Low||N/A||N/A||N/A||N/A|
|Vulnerability (vs. fearless)||N/A||N/A||N/A||Low||High||N/A||N/A||High||High||N/A||N/A||N/A||N/A||N/A|
|Extraversion (vs. introversion)|
|Warmth (vs. coldness)||Low||Low||Low||N/A||N/A||N/A||Low||N/A||High||N/A||Low||Low||N/A||High|
|Gregariousness (vs. withdrawal)||Low||Low||Low||N/A||N/A||High||N/A||Low||N/A||N/A||N/A||Low||N/A||High|
|Assertiveness (vs. submissiveness)||N/A||N/A||N/A||High||N/A||N/A||High||Low||Low||N/A||Low||N/A||N/A||N/A|
|Activity (vs. passivity)||N/A||Low||N/A||High||N/A||High||N/A||N/A||N/A||N/A||Low||N/A||High||N/A|
|Excitement seeking (vs. lifeless)||N/A||Low||N/A||High||N/A||High||High||Low||N/A||Low||N/A||Low||N/A||High|
|Positive emotionality (vs. anhedonia)||N/A||Low||Low||N/A||N/A||High||N/A||Low||N/A||N/A||N/A||N/A||N/A||High|
|Open-mindedness (vs. closed-minded)|
|Fantasy (vs. concrete)||N/A||N/A||High||N/A||N/A||High||N/A||N/A||N/A||N/A||N/A||N/A||Low||High|
|Aesthetics (vs. disinterest)||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A|
|Feelings (vs. alexithymia)||N/A||Low||N/A||N/A||High||High||Low||N/A||N/A||Low||N/A||N/A||N/A||High|
|Actions (vs. predictable)||Low||Low||N/A||High||High||High||High||Low||N/A||Low||Low||N/A||Low||N/A|
|Ideas (vs. closed-minded)||Low||N/A||High||N/A||N/A||N/A||N/A||N/A||N/A||Low||Low||Low||Low||N/A|
|Values (vs. dogmatic)||Low||High||N/A||N/A||N/A||N/A||N/A||N/A||N/A||Low||N/A||N/A||High||N/A|
|Agreeableness (vs. antagonism)|
|Trust (vs. mistrust)||Low||N/A||N/A||Low||N/A||High||Low||N/A||High||N/A||N/A||Low||High||Low|
|Straightforwardness (vs. deception)||Low||N/A||N/A||Low||N/A||N/A||Low||N/A||N/A||N/A||Low||N/A||High||Low|
|Altruism (vs. exploitative)||Low||N/A||N/A||Low||N/A||N/A||Low||N/A||High||N/A||N/A||N/A||High||Low|
|Compliance (vs. aggression)||Low||N/A||N/A||Low||N/A||N/A||Low||N/A||High||N/A||Low||N/A||High||Low|
|Modesty (vs. arrogance)||N/A||N/A||N/A||Low||N/A||N/A||Low||High||High||N/A||N/A||High||High||Low|
|Tender-mindedness (vs. tough-minded)||Low||N/A||N/A||Low||N/A||N/A||Low||N/A||High||N/A||N/A||N/A||N/A||Low|
|Conscientiousness (vs. disinhibition)|
|Competence (vs. laxness)||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||High||Low||N/A||Low||High|
|Order (vs. disorderly)||N/A||N/A||Low||N/A||N/A||N/A||N/A||N/A||N/A||N/A||High||Low||N/A||N/A|
|Dutifulness (vs. irresponsibility)||N/A||N/A||N/A||Low||N/A||N/A||N/A||N/A||N/A||High||Low||High||High||N/A|
|Achievement striving (vs. lackadaisical)||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||N/A||High||N/A||N/A||High||Low|
|Self-discipline (vs. negligence)||N/A||N/A||N/A||Low||N/A||Low||N/A||N/A||N/A||High||Low||N/A||High||Low|
|Deliberation (vs. rashness)||N/A||N/A||N/A||Low||Low||Low||N/A||N/A||N/A||High||N/A||High||High||Low|
Abbreviations used: PPD – Paranoid Personality Disorder, SzPD – Schizoid Personality Disorder, StPD – Schizotypal Personality Disorder, ASPD – Antisocial Personality Disorder, BPD – Borderline Personality Disorder, HPD – Histrionic Personality Disorder, NPD – Narcissistic Personality Disorder, AvPD – Avoidant Personality Disorder, DPD – Dependent Personality Disorder, OCPD – Obsessive-Compulsive Personality Disorder, PAPD – Passive-Aggressive Personality Disorder, DpPD – Depressive Personality Disorder, SDPD – Self-Defeating Personality Disorder, SaPD – Sadistic Personality Disorder, and n/a – not available.
As of 2002, there were over fifty published studies relating the five factor model (FFM) to personality disorders. Since that time, quite a number of additional studies have expanded on this research base and provided further empirical support for understanding the DSM personality disorders in terms of the FFM domains. In her seminal review of the personality disorder literature published in 2007, Lee Anna Clark asserted that "the five-factor model of personality is widely accepted as representing the higher-order structure of both normal and abnormal personality traits".
The five factor model has been shown to significantly predict all 10 personality disorder symptoms and outperform the Minnesota Multiphasic Personality Inventory (MMPI) in the prediction of borderline, avoidant, and dependent personality disorder symptoms.
Research results examining the relationships between the FFM and each of the ten DSM personality disorder diagnostic categories are widely available. For example, in a study published in 2003 titled "The five-factor model and personality disorder empirical literature: A meta-analytic review", the authors analyzed data from 15 other studies to determine how personality disorders are different and similar, respectively, with regard to underlying personality traits. In terms of how personality disorders differ, the results showed that each disorder displays a FFM profile that is meaningful and predictable given its unique diagnostic criteria. With regard to their similarities, the findings revealed that the most prominent and consistent personality dimensions underlying a large number of the personality disorders are positive associations with neuroticism and negative associations with agreeableness.
Openness to experienceEdit
At least three aspects of openness to experience are relevant to understanding personality disorders: cognitive distortions, lack of insight and impulsivity. Problems related to high openness that can cause problems with social or professional functioning are excessive fantasising, peculiar thinking, diffuse identity, unstable goals and nonconformity with the demands of the society.
High openness is characteristic to schizotypal personality disorder (odd and fragmented thinking), narcissistic personality disorder (excessive self-valuation) and paranoid personality disorder (sensitivity to external hostility). Lack of insight (shows low openness) is characteristic to all personality disorders and could explain the persistence of maladaptive behavioral patterns.
The problems associated with low openness are difficulties adapting to change, low tolerance for different worldviews or lifestyles, emotional flattening, alexithymia and a narrow range of interests. Rigidity is the most obvious aspect of (low) openness among personality disorders and indicates a lack of awareness of one's emotional experiences. It is most characteristic of obsessive-compulsive personality disorder; its opposite, impulsivity (here, an aspect of openness that shows a tendency to behave unusually or autistically), is characteristic of schizotypal and borderline personality disorders.
Causes and risk factorsEdit
Currently, there are no definitive proven causes for personality disorders. However, there are numerous possible causes and known risk factors supported by scientific research that vary depending on the disorder, the individual, and the circumstance. Overall, findings show that genetic disposition and life experiences, such as trauma and abuse, play a key role in the development of personality disorders.
Child abuse and neglect consistently show up as risk factors for the development of personality disorders in adulthood. One study examined retrospective reports of abuse from participants who had demonstrated psychopathology throughout their lives and found that many had experienced past abuse. In a study of 793 mothers and children, researchers asked mothers if they had screamed at their children, told them that they did not love them, or threatened to send them away. Children who had experienced such verbal abuse were three times as likely as other children (who did not experience such verbal abuse) to have borderline, narcissistic, obsessive-compulsive or paranoid personality disorders in adulthood. The sexually abused group demonstrated the most consistently elevated patterns of psychopathology. Officially verified physical abuse showed an extremely strong correlation with the development of antisocial and impulsive behavior. On the other hand, cases of abuse of the neglectful type that created childhood pathology were found to be subject to partial remission in adulthood.
Socioeconomic status has also been looked at as a potential cause for personality disorders. There is a strong association between low parental/neighborhood socioeconomic status and personality disorder symptoms. In a recent study comparing parental socioeconomic status and a child's personality, it was seen that children who were from higher socioeconomic backgrounds were more altruistic, less risk seeking, and had overall higher IQs. These traits correlate with a low risk of developing personality disorders later on in life. A study of female children who were detained for disciplinary actions found that psychological problems were most negatively associated with socioeconomic problems. Furthermore, social disorganization was found to be inversely correlated with personality disorder symptoms.
Evidence shows personality disorders may begin with parental personality issues. These cause the parent to have their own difficulties in adulthood, such as difficulties reaching higher education, obtaining jobs, and securing dependable relationships. By either genetic or modeling mechanisms, children can pick up these traits. Additionally, poor parenting appears to have symptom elevating effects on personality disorders. More specifically, lack of maternal bonding has also been correlated with personality disorders. In a study comparing 100 healthy individuals to 100 borderline personality disorder patients, analysis showed that BPD patients were significantly more likely not to have been breastfed as a baby (42.4% in BPD vs. 9.2% in healthy controls). These researchers suggested this act may be essential in fostering maternal relationships. Additionally, findings suggest personality disorders show a negative correlation with two attachment variables: maternal availability and dependability. When left unfostered, other attachment and interpersonal problems occur later in life ultimately leading to development of personality disorders.
Currently, genetic research on the development of personality disorders is severely lacking. However, a few possible risk factors are under investigation. Researchers are currently looking into genetic mechanisms for traits such as aggression, fear and anxiety, which are associated with diagnosed individuals. More research is being conducted into disorder-specific mechanisms.
The prevalence of personality disorder in the general community was largely unknown until surveys starting from the 1990s. In 2008 the median rate of diagnosable PD was estimated at 10.6%, based on six major studies across three nations. This rate of around one in ten, especially as associated with high use of services, is described as a major public health concern requiring attention by researchers and clinicians.
The prevalence of individual personality disorders ranges from about 2% to 3% for the more common varieties, such as schizotypal, antisocial, borderline, and histrionic, to 0.5–1% for the least common, such as narcissistic and avoidant.
A screening survey across 13 countries by the World Health Organization using DSM-IV criteria, reported in 2009 a prevalence estimate of around 6% for personality disorders. The rate sometimes varied with demographic and socioeconomic factors, and functional impairment was partly explained by co-occurring mental disorders. In the US, screening data from the National Comorbidity Survey Replication between 2001 and 2003, combined with interviews of a subset of respondents, indicated a population prevalence of around 9% for personality disorders in total. Functional disability associated with the diagnoses appeared to be largely due to co-occurring mental disorders (Axis I in the DSM).
A UK national epidemiological study (based on DSM-IV screening criteria), reclassified into levels of severity rather than just diagnosis, reported in 2010 that the majority of people show some personality difficulties in one way or another (short of the threshold for diagnosis), while the prevalence of the most complex and severe cases (including meeting criteria for multiple diagnoses in different clusters) was estimated at 1.3%. Even low levels of personality symptoms were associated with functional problems, but those most severely in need of services constituted a much smaller group.
| Type of personality disorder | Predominant sex |
|---|---|
| Paranoid personality disorder | Male |
| Schizoid personality disorder | Male |
| Schizotypal personality disorder | Male |
| Antisocial personality disorder | Male |
| Borderline personality disorder | Female |
| Histrionic personality disorder | Female |
| Narcissistic personality disorder | Male |
| Avoidant personality disorder | Male |
| Dependent personality disorder | Female |
| Depressive personality disorder | Female |
| Passive-aggressive personality disorder | Male |
| Obsessive-compulsive personality disorder | Male |
| Self-defeating personality disorder | Female |
| Sadistic personality disorder | Male |
There is considerable diagnostic co-occurrence among personality disorders. Patients who meet the DSM-IV-TR diagnostic criteria for one personality disorder are likely to meet the diagnostic criteria for another. Diagnostic categories provide clear, vivid descriptions of discrete personality types, but the personality structure of actual patients might be more accurately described by a constellation of maladaptive personality traits.
| Type of Personality Disorder | PPD | SzPD | StPD | ASPD | BPD | HPD | NPD | AvPD | DPD | OCPD | PAPD |
|---|---|---|---|---|---|---|---|---|---|---|---|
Sites used DSM-III-R criterion sets. Data obtained for purposes of informing the development of the DSM-IV-TR personality disorder diagnostic criteria.
Abbreviations used: PPD – Paranoid Personality Disorder, SzPD – Schizoid Personality Disorder, StPD – Schizotypal Personality Disorder, ASPD – Antisocial Personality Disorder, BPD – Borderline Personality Disorder, HPD – Histrionic Personality Disorder, NPD – Narcissistic Personality Disorder, AvPD – Avoidant Personality Disorder, DPD – Dependent Personality Disorder, OCPD – Obsessive-Compulsive Personality Disorder, PAPD – Passive-Aggressive Personality Disorder.
There are many different forms (modalities) of treatment used for personality disorders:
- Individual psychotherapy has been a mainstay of treatment. There are long-term and short-term (brief) forms.
- Family therapy, including couples therapy.
- Group therapy for personality dysfunction is probably the second most commonly used modality.
- Psychoeducation may be used as an adjunct.
- Self-help groups may provide resources for personality disorders.
- Psychiatric medications for treating symptoms of personality dysfunction or co-occurring conditions.
- Milieu therapy, a kind of group-based residential approach, has a history of use in treating personality disorders, including therapeutic communities.
- The practice of mindfulness that includes developing the ability to be nonjudgmentally aware of unpleasant emotions appears to be a promising clinical tool for managing different types of personality disorders.
There are different specific theories or schools of therapy within many of these modalities. They may, for example, emphasize psychodynamic techniques, or cognitive or behavioral techniques. In clinical practice, many therapists use an 'eclectic' approach, taking elements of different schools as and when they seem to fit an individual client. There is also often a focus on common themes that seem to be beneficial regardless of technique, including attributes of the therapist (e.g. trustworthiness, competence, caring), processes afforded to the client (e.g. ability to express and confide difficulties and emotions), and the match between the two (e.g. aiming for mutual respect, trust and boundaries).
| Cluster | Evidence for Brain Dysfunction | Response to Biological Treatments | Response to Psychosocial Treatments |
|---|---|---|---|
| A | Evidence for relationship to schizophrenia; otherwise none known | Schizotypal patients may improve on antipsychotic medication; otherwise not indicated | Poor. Supportive psychotherapy may help |
| B | Evidence for relationship to bipolar disorder; otherwise none known | Antidepressants, antipsychotics, or mood stabilizers may help for borderline personality; otherwise not indicated | Poor in antisocial personality. Variable in borderline, narcissistic, and histrionic personalities |
| C | Evidence for relationship to generalized anxiety disorder; otherwise none known | No direct response. Medications may help with comorbid anxiety and depression | Most common treatment for these disorders. Response variable |
The management and treatment of personality disorders can be a challenging and controversial area, for by definition the difficulties have been enduring and affect multiple areas of functioning. This often involves interpersonal issues, and there can be difficulties in seeking and obtaining help from organizations in the first place, as well as with establishing and maintaining a specific therapeutic relationship. On the one hand, an individual may not consider themselves to have a mental health problem, while on the other, community mental health services may view individuals with personality disorders as too complex or difficult, and may directly or indirectly exclude individuals with such diagnoses or associated behaviors. The disruptiveness that people with personality disorders can create in an organization makes these, arguably, the most challenging conditions to manage.
Apart from all these issues, an individual may not consider their personality to be disordered or the cause of problems. This perspective may be caused by the patient's ignorance or lack of insight into their own condition, an ego-syntonic perception of the problems with their personality that prevents them from experiencing it as being in conflict with their goals and self-image, or by the simple fact that there is no distinct or objective boundary between 'normal' and 'abnormal' personalities. Unfortunately, there is substantial social stigma and discrimination related to the diagnosis.
The term 'personality disorder' encompasses a wide range of issues, each with a different level of severity or disability; thus, personality disorders can require fundamentally different approaches and understandings. To illustrate the scope of the matter, consider that while some disorders or individuals are characterized by continual social withdrawal and the shunning of relationships, others may cause fluctuations in forwardness. The extremes are worse still: at one extreme lie self-harm and self-neglect, while at another extreme some individuals may commit violence and crime. There can be other factors such as problematic substance use or dependency or behavioral addictions. A person may meet the criteria for multiple personality disorder diagnoses and/or other mental disorders, either at particular times or continually, thus making coordinated input from multiple services a potential requirement.
Therapists in this area can become disheartened by lack of initial progress, or by apparent progress that then leads to setbacks. Clients may be perceived as negative, rejecting, demanding, aggressive or manipulative. This has been looked at in terms of both therapist and client; in terms of social skills, coping efforts, defense mechanisms, or deliberate strategies; and in terms of moral judgments or the need to consider underlying motivations for specific behaviors or conflicts. The vulnerabilities of a client, and indeed a therapist, may become lost behind actual or apparent strength and resilience. It is commonly stated that there is always a need to maintain appropriate professional personal boundaries, while allowing for emotional expression and therapeutic relationships. However, there can be difficulty acknowledging the different worlds and views that both the client and therapist may live with. A therapist may assume that the kinds of relationships and ways of interacting that make them feel safe and comfortable have the same effect on clients. As an example of one extreme, people who may have been exposed to hostility, deceptiveness, rejection, aggression or abuse in their lives, may in some cases be made confused, intimidated or suspicious by presentations of warmth, intimacy or positivity. On the other hand, reassurance, openness and clear communication are usually helpful and needed. It can take several months of sessions, and perhaps several stops and starts, to begin to develop a trusting relationship that can meaningfully address a client's issues.
Before the 20th century
Personality disorder is a term with a distinctly modern meaning, owing in part to its clinical usage and the institutional character of modern psychiatry. The currently accepted meaning must be understood in the context of historically changing classification systems such as DSM-IV and its predecessors. Although highly anachronistic, and ignoring radical differences in the character of subjectivity and social relations, some have suggested similarities to other concepts going back to at least the ancient Greeks. For example, the Greek philosopher Theophrastus described 29 'character' types that he saw as deviations from the norm, and similar views have been found in Asian, Arabic and Celtic cultures. A long-standing influence in the Western world was Galen's concept of personality types, which he linked to the four humours proposed by Hippocrates.
Such views lasted into the eighteenth century, when experiments began to question the supposed biologically based humours and 'temperaments'. Psychological concepts of character and 'self' became widespread. In the nineteenth century, 'personality' referred to a person's conscious awareness of their behavior, a disorder of which could be linked to altered states such as dissociation. This sense of the term has been compared to the use of the term 'multiple personality disorder' in the first versions of the DSM.
Physicians in the early nineteenth century started to diagnose forms of insanity involving disturbed emotions and behaviors but seemingly without significant intellectual impairment, delusions or hallucinations. Philippe Pinel referred to this as 'manie sans délire' – mania without delusions – and described a number of cases mainly involving excessive or inexplicable anger or rage. James Cowles Prichard advanced a similar concept he called moral insanity, which would be used to diagnose patients for some decades. 'Moral' in this sense referred to affect (emotion or mood) rather than ethics, but the concept was arguably based in part on religious, social and moral beliefs, with a pessimism about medical intervention that implied social control should take precedence. These categories were much different and broader than later definitions of personality disorder, while also being developed by some into a more specific meaning of moral degeneracy akin to later ideas about 'psychopaths'. Separately, Richard von Krafft-Ebing popularized the terms sadism and masochism, as well as homosexuality, as psychiatric issues.
The German psychiatrist Koch sought to make the moral insanity concept more scientific, and in 1891 suggested the phrase 'psychopathic inferiority', theorized to be a congenital disorder. This referred to continual and rigid patterns of misconduct or dysfunction in the absence of apparent mental retardation or illness, supposedly without a moral judgment. Koch's work, described as deeply rooted in his Christian faith, has been called a fundamental text on personality disorders that is still of use today.
In the early 20th century, another German psychiatrist, Emil Kraepelin, included a chapter on psychopathic inferiority in his influential work on clinical psychiatry for students and physicians. He suggested six types – excitable, unstable, eccentric, liar, swindler and quarrelsome. The categories were essentially defined by the most disordered criminal offenders observed, distinguishing between criminals by impulse, professional criminals, and morbid vagabonds who wandered through life. Kraepelin also described three paranoid (meaning then delusional) disorders, resembling later concepts of schizophrenia, delusional disorder and paranoid personality disorder. A diagnostic term for the latter concept would be included in the DSM from 1952, and from 1980 the DSM would also include schizoid and schizotypal personality disorders; interpretations of earlier (1921) theories of Ernst Kretschmer led to a distinction between these and another type later included in the DSM, avoidant personality disorder.
In 1933 Russian psychiatrist Pyotr Borisovich Gannushkin published his book Manifestations of psychopathies: statics, dynamics, systematic aspects, which was one of the first attempts to develop a detailed typology of psychopathies. Regarding maladaptation, ubiquity, and stability as the three main symptoms of behavioral pathology, he distinguished nine clusters of psychopaths: cycloids (including constitutionally depressive, constitutionally excitable, cyclothymics, and emotionally labile), asthenics (including psychasthenics), schizoids (including dreamers), paranoiacs (including fanatics), epileptoids, hysterical personalities (including pathological liars), unstable psychopaths, antisocial psychopaths, and constitutionally stupid. Some elements of Gannushkin's typology were later incorporated into the theory developed by a Russian adolescent psychiatrist, Andrey Yevgenyevich Lichko, who was also interested in psychopathies along with their milder forms, the so-called accentuations of character.
In 1939, psychiatrist David Henderson published a theory of 'psychopathic states' that contributed to popularly linking the term to anti-social behavior. Hervey M. Cleckley's 1941 text, The Mask of Sanity, based on his personal categorization of similarities he noted in some prisoners, marked the start of the modern clinical conception of psychopathy and its popular usage.
Towards the mid 20th century, psychoanalytic theories were coming to the fore, based on work from the turn of the century being popularized by Sigmund Freud and others. This included the concept of character disorders, which were seen as enduring problems linked not to specific symptoms but to pervasive internal conflicts or derailments of normal childhood development. These were often understood as weaknesses of character or willful deviance, and were distinguished from neurosis or psychosis. The term 'borderline' stems from a belief that some individuals were functioning on the edge of those two categories, and a number of the other personality disorder categories were also heavily influenced by this approach, including dependent, obsessive-compulsive and histrionic, the latter starting off as a conversion symptom of hysteria particularly associated with women, then a hysterical personality, and then renamed histrionic personality disorder in later versions of the DSM. A passive-aggressive style was defined clinically by Colonel William Menninger during World War II in the context of men's reactions to military compliance, and would later be referenced as a personality disorder in the DSM. Otto Kernberg was influential with regard to the concepts of borderline and narcissistic personalities, which were later incorporated as disorders into the DSM in 1980.
Meanwhile, a more general personality psychology had been developing in academia and to some extent clinically. Gordon Allport published theories of personality traits from the 1920s, and Henry Murray advanced a theory called personology, which influenced a later key advocate of personality disorders, Theodore Millon. Tests were being developed and applied for personality evaluation, including projective tests such as the Rorschach, as well as questionnaires such as the Minnesota Multiphasic Personality Inventory. Around mid-century, Hans Eysenck was analysing traits and personality types, and psychiatrist Kurt Schneider was popularising a clinical use in place of the previously more usual terms 'character', 'temperament' or 'constitution'.
American psychiatrists officially recognised concepts of enduring personality disturbances in the first Diagnostic and Statistical Manual of Mental Disorders in the 1950s, which relied heavily on psychoanalytic concepts. Somewhat more neutral language was employed in the DSM-II in 1968, though the terms and descriptions had only a slight resemblance to current definitions. The DSM-III published in 1980 made some major changes, notably putting all personality disorders onto a second separate 'axis' along with mental retardation, intended to signify more enduring patterns, distinct from what were considered axis one mental disorders. The 'inadequate' and 'asthenic' personality disorder categories were deleted, and others were expanded into more types, or changed from being personality disorders to regular disorders. Sociopathic personality disorder, which had been the term for psychopathy, was renamed Antisocial Personality Disorder. Most categories were given more specific 'operationalized' definitions, with standard criteria psychiatrists could agree on to conduct research and diagnose patients. In the DSM-III revision, self-defeating personality disorder and sadistic personality disorder were included as provisional diagnoses requiring further study. They were dropped in the DSM-IV, though a proposed 'depressive personality disorder' was added; in addition, the official diagnosis of passive-aggressive personality disorder was dropped, tentatively renamed 'negativistic personality disorder'.
International differences have been noted in how attitudes have developed towards the diagnosis of personality disorder. Kurt Schneider argued they were 'abnormal varieties of psychic life' and therefore not necessarily the domain of psychiatry, a view said to still have influence in Germany today. British psychiatrists have also been reluctant to address such disorders or consider them on par with other mental disorders, which has been attributed partly to resource pressures within the National Health Service, as well as to negative medical attitudes towards behaviors associated with personality disorders. In the US, the prevailing healthcare system and psychoanalytic tradition have been said to provide a rationale for private therapists to diagnose some personality disorders more broadly and provide ongoing treatment for them.
Smithsonian Videohistory Collection
The History of PCR
The Polymerase Chain Reaction (PCR) technique, invented in 1985 by Kary B. Mullis, allowed scientists to make millions of copies of a scarce sample of DNA. The technique has revolutionized many aspects of current research, including the diagnosis of genetic defects and the detection of the AIDS virus in human cells. The technique is also used by criminologists to link specific persons to samples of blood or hair via DNA comparison. PCR has also affected evolutionary studies, because large quantities of DNA can be manufactured from fossils containing only trace amounts.
Kary Mullis invented the PCR technique in 1985 while working as a chemist at the Cetus Corporation, a biotechnology firm in Emeryville, California. The procedure requires placing a small amount of the DNA containing the desired gene into a test tube. A large batch of loose nucleotides, which link into exact copies of the original gene, is also added to the tube. A pair of short synthesized DNA segments, which match segments on each side of the desired gene, is added. These "primers" find the right portion of the DNA and serve as starting points for DNA copying. When a DNA polymerase from the bacterium Thermus aquaticus (Taq polymerase) is added, the loose nucleotides lock into a DNA sequence dictated by the sequence of the target gene located between the two primers.
The test tube is heated, and the DNA's double helix separates into two strands. The DNA sequence of each strand of the helix is thus exposed, and as the temperature is lowered the primers automatically bind to their complementary portions of the DNA sample. At the same time, the enzyme links the loose nucleotides to the primer and to each of the separated DNA strands in the appropriate sequence. The complete reaction, which takes approximately five minutes, results in two double helices containing the desired portion of the original. The heating and cooling is repeated, doubling the number of DNA copies. After thirty to forty cycles are completed, a single copy of a piece of DNA can be multiplied to hundreds of millions of copies.
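To make the doubling arithmetic concrete, here is a minimal sketch (not part of the original finding aid) that models idealized PCR amplification in Python; the function name and the efficiency parameter are illustrative assumptions, since real reactions amplify at less than 100% efficiency.

```python
# Idealized PCR amplification: each thermal cycle roughly doubles the copy count.
# Simplified illustration only; real reactions are less than perfectly efficient.

def pcr_copies(starting_copies: int, cycles: int, efficiency: float = 1.0) -> float:
    """Return the expected number of copies after a given number of cycles.

    efficiency = 1.0 means perfect doubling each cycle; values below 1.0
    model incomplete amplification.
    """
    return starting_copies * (1 + efficiency) ** cycles

if __name__ == "__main__":
    for cycles in (10, 20, 30, 40):
        print(f"{cycles:2d} cycles: ~{pcr_copies(1, cycles):.3g} copies")
    # With perfect doubling, 30 cycles turn a single starting copy into
    # roughly a billion copies, and 40 cycles into roughly a trillion.
```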
When performed manually, Mullis' PCR technique was slow and labor-intensive, so Cetus scientists began looking for ways to automate the process. Before the discovery of the thermostable Taq enzyme, scientists needed to add fresh enzyme in each cycle. The first thermocycling machine, "Mr. Cycle," was developed by Cetus engineers to address that need to add fresh enzyme to each test tube after the heating and cooling process. The purification of the Taq polymerase then created the need for a machine that could cycle more rapidly among different temperatures. In 1985, Cetus formed a joint venture with the Perkin-Elmer Corporation in Norwalk, Connecticut, and introduced the DNA Thermal Cycler. By 1988, Cetus was receiving numerous inquiries about licensing to perform PCR for commercial diagnostic purposes. On January 15, 1989, Cetus announced an agreement to collaborate with Hoffman-LaRoche on the development and commercialization of in vitro human diagnostic products and services based on PCR technology. Roche Molecular Systems eventually bought the PCR patent and associated technology from Cetus for $300,000,000.
Ramunas Kondratas, curator at the Smithsonian's National Museum of American History (NMAH), documented the discovery, development, commercialization, and applications of PCR technology. Three sessions were recorded May 14 and May 15, 1992 at Emeryville, California; September 25, 1992 at Alameda, California; and February 25, 1993 at Norwalk, Connecticut.
Interviewees included scientists, engineers, and managers from Cetus Corporation, Roche Molecular Systems, and Perkin-Elmer Corporation. Norman Arnheim first became interested in the study of medicine in high school, as the result of a summer spent working at a hospital. He received his B.A. (1960) and M.A. (1962) from the University of Rochester, and his Ph.D. (1966) in Drosophila genetics from the University of California, Berkeley. Currently serving as professor of molecular biology at the University of Southern California, Arnheim formerly worked at Cetus Corporation on PCR. John G. Atwood came to Perkin-Elmer Corporation in November 1948 with a master's degree in electrical engineering from Columbia University (1948). He currently serves as senior scientist for the biotechnology instrument group.
Peter Barrett holds a B.S. in Chemistry from Lowell Technological Institute and a Ph.D. in Analytical Chemistry from Northeastern University. He joined Perkin-Elmer in 1970 as product specialist in the Instrument Division, was promoted to manager of the Applications Laboratory in 1982, and director of the Laboratory Robotics Department in 1985. In 1988, Barrett was named director of European Marketing and relocated to Italy. In 1989, he moved to Germany to set up the European Sales and Service Center. He returned to the U.S. in 1990 to serve as division vice-president of Instruments and was named vice-president of the Life Sciences Division in 1991. In 1993, in conjunction with the merger with Applied Biosystems Incorporated, he moved to California to become executive vice-president, Applied Biosystems Division.
Joseph L. DiCesare received his Ph.D. in Biochemistry from the University of Rhode Island. In 1976, he accepted the position of assistant product line manager at Perkin-Elmer Corporation and was appointed product line manager of the Gas Chromatography division in 1983. In 1987, he was promoted to the position of Research and Development Applications manager of the Biotechnology Division.
Henry Anthony Erlich received his B.A. in biochemical sciences from Harvard University in 1965 and his Ph.D. in genetics from University of Washington in 1972. He served as a postdoctoral fellow in the Department of Biology at Princeton University from 1972 to 1975 and in the Department of Medicine at Stanford University from 1975 to 1979. He joined the Cetus Corporation in 1979 and was appointed senior scientist and director of Human Genetics in 1981. After the dissolution of Cetus in 1991, Erlich transferred to Roche Molecular Systems to serve as director of Human Genetics.
A few years after graduating from high school, Fred Faloona began working as a research assistant under Kary B. Mullis at the Cetus Corporation, c. 1983. He assisted Mullis with the initial development and application of PCR. He followed Mullis to Xytronyx Incorporated in 1986 where he served as a research associate working on DNA and RNA sequencing and further applications of PCR. In 1988, he returned to Cetus as a research assistant where he worked on the application of PCR to the discovery of new retroviruses and he further refined PCR detection techniques. In 1991, Faloona and a partner began Saddle Point System, a small company designing computer hardware and software.
David H. Gelfand completed his B.A. in Biology at Brandeis University in 1966. After receiving a Ph.D. in Biology from the University of California, San Diego in 1970, he began work as an assistant research biochemist at the University of California in San Francisco. He was offered the position of director of Recombinant Molecular Research at Cetus in 1976 and was promoted to vice-president of that division in 1979. He later accepted positions as vice-president of Scientific Affairs and director of Core Technology, PCR Division, in 1981 and 1988. In 1991, Gelfand also transferred to Roche Molecular Systems to serve as director for the Program in Core Research.
Lawrence Allen Haff received his B.S. in Biochemistry from Michigan State University in 1969. After completing his Ph.D. in Biochemistry from Cornell University in 1974, Haff served as a research fellow in the biological laboratories of Harvard University. In 1976, he accepted the position of senior research scientist at Pharmacia. He transferred to Millipore Corporation in 1982 to serve as technical research manager developing and supporting high performance separation techniques. He joined the Perkin-Elmer Corporation in 1985 as principal scientist and research manager to help develop the DNA Thermal Cycler.
After receiving his B.S. in mechanical engineering from the University of California-Davis in 1978, David C. Jones worked as a stress engineer for the Boeing Commercial Aircraft Company. In 1980, he joined the Bio-Rad Laboratories designing and developing chromatography instruments. He accepted the position of mechanical engineer at Cetus Corporation in 1986 to work on thermocycling instrumentation. He also completed an M.B.A. in management from Golden State University in 1988.
Elena D. Katz was awarded her M.S. degree in Chemistry from Moscow University, Russia. From 1969 to 1972, she studied in the Ph.D. program at the Institute of Physical Chemistry of the Academy of Sciences in Moscow. In 1973, she was appointed associate researcher in the physical chemistry department of Moscow University. After moving to the United States, Katz became Senior Staff Scientist at Perkin-Elmer in 1977 working on various multidisciplinary projects utilizing liquid and gas chromatography. Since 1985, Katz has concurrently pursued a Ph.D. in Chemistry from the University of London. Shirley Kwok began her career as a research associate with the Assay Department of Cetus Corporation after graduating from the University of California, Berkeley, with a degree in microbiology. Kwok was part of a group of researchers devoted to the use of PCR to detect HIV in human cells. Currently, she holds the position of research investigator for Hoffman-La Roche at Roche Molecular Systems.
Richard Leath started with Cetus in 1980, after receiving a master's degree in electrical engineering from Purdue University in 1974. Leath spent a decade developing machines like Mr. Cycle, and is currently functioning as senior engineer at Maxwell Labs, Richmond, California, a firm which develops particle accelerators.
Kary B. Mullis received his B.S. in Chemistry from the Georgia Institute of Technology in 1966 and his Ph.D. in Biochemistry from the University of California-Berkeley in 1972. In 1973, he was awarded a post-doctoral fellowship in pediatric cardiology at the University of Kansas Medical School. He returned to California in 1977 and was awarded another fellowship in pharmaceutical chemistry from the University of California, San Francisco to research endorphins and the opiate receptor. He accepted the position of scientist at Cetus in 1979 to work in the chemistry department researching oligonucleotide synthesis and chemistry. He transferred to the Department of Human Genetics in 1984 to conduct research on DNA technology. In 1986, Mullis accepted the position of director of Molecular Biology at Xytronyx, Inc., researching DNA technology, photochemistry, and photobiology. He left Xytronyx in 1988 and currently serves as a private consultant to a variety of companies in the field of nucleic acid chemistry. Mullis won the Nobel Prize in Chemistry in 1993 for his invention of the PCR technique.
Lynn H. Pasahow graduated from Stanford University in 1969 and received his law degree from the University of California at Berkeley School of Law in 1972. He joined the firm of McCutchen, Doyle, Brown, and Enersen in 1973, and presently chairs the firm's intellectual property group. He had advised clients and handled complex litigation involving patent, copyright, trademark, trade secret, licensing, export-import, noncompetition, and trade regulation disputes, most involving biotechnology, computer hardware and software and other advanced technology products. He led the group of lawyers which successfully obtained a jury verdict upholding Cetus' landmark polymerase chain reaction patents against the Dupont Company challenge. Enrico Picozza began work with Perkin-Elmer in June 1985, shortly after receiving his degree from the University of Connecticut. Currently, he is working as senior technical specialist, and is devoted to specifying, developing, testing and evaluating instrumentation primarily for the PCR market.
Riccardo Pigliucci earned his degree in chemistry in Milan, Italy and is a graduate of the Management Program at Northeastern University. He joined Perkin-Elmer in 1966 and held numerous management positions in analytical instrument operations in Europe as well as in the U.S. He was appointed general manager of the U.S. Instrument Division in 1989 after serving as director of Worldwide Instrument Marketing since 1985. In 1988, Pigliucci was appointed a sector vice-president in Connecticut Operations. The following year, he was elected corporate vice-president of Perkin-Elmer Instruments. He became president of the Instrument Group in 1991 and was named senior vice-president of Perkin-Elmer Corporation in 1992. In 1993, he was elected president and chief operating officer. He is also a director of the Corporation.
After receiving his bachelor's degree in Chemistry and Biology from the University of Washington in 1978, Randall K. Saiki served one year as a laboratory technician in its Department of Microbiology. In 1979, he transferred to Washington University to serve as a lab technician in the Biology Department. He joined the Cetus Corporation in late 1979 as a research assistant in the Recombinant DNA Group. In 1981, he was promoted to research associate in the Department of Human Genetics and was named scientist in that department in 1989. Saiki transferred to Roche Molecular Systems in 1991 to serve as research investigator in the Department of Human Genetics. Stephen Scharf received a degree in bacteriology from the University of California, Davis. He worked there as a biochemist for four and a half years until 1980, when he came to Cetus. Scharf was a research associate in the Department of Human Genetics at Cetus at the time PCR was developed. Currently, he serves as senior scientist at Roche Molecular Systems.
Donna Marie Seyfried graduated from Lehigh University with a B.S. in Microbiology. Her professional career began as a microbiologist for the E.I. Dupont de Nemours Company. Seyfried joined Perkin-Elmer in 1985. From 1990 to 1993, she served as business director for Biotechnology Instrument Systems. In 1994, she was appointed director of Corporate Business Development and Strategic Planning. She was responsible for managing the development, commercialization, and marketing of the PCR business as part of the Perkin-Elmer Cetus Joint Venture, and the subsequent strategic alliance with Hoffman-LaRoche. She was also instrumental in the Perkin-Elmer Applied Biosystems merger.
After receiving his B.S. from Bates College in 1972 and his Ph.D. from Purdue University in 1976, John J. Sninsky accepted a postdoctoral fellowship from the Departments of Genetics and Medicine at the Stanford University School of Medicine. In 1981, he accepted an assistant professorship at the Albert Einstein College of Medicine. He joined the Cetus Corporation in 1984 as a senior scientist in the Department of Microbial Genetics. In 1985, he was appointed director of the Diagnostics Program and of the Department of Infectious Diseases. In 1988, he was promoted to senior director of both of those departments. Sninsky transferred to Roche Molecular Systems in 1991 to serve as senior director for research. Robert Watson, who joined Cetus in 1977, is currently functioning as a research investigator with Roche Molecular Systems, working on nucleic acid-based diagnostics.
Thomas J. White graduated from Johns Hopkins University in 1967 with a B.A. in Chemistry. After serving for four years as a Peace Corps volunteer in Liberia, he received his Ph.D. in Biochemistry from the University of California, Berkeley in 1976. In 1978, he joined the Cetus Corporation as a scientist, and was promoted to director of Molecular and Biological Research and associate director of Research and Development in 1981. He was appointed vice president of Research in 1984. He transferred to Roche Diagnostics Research in 1989 to serve as senior director and in 1991 was appointed vice president of Research and Development of Roche Molecular Systems and associate vice president of Hoffman-LaRoche, Incorporated. Joseph Widunas, who graduated from the University of Illinois with a degree in engineering in 1975, came to Cetus in 1981 as a sound engineer. Now director of new product development for Colestech Corporation, Hayward, California, he was instrumental in the development of the second Mr. Cycle prototype, "Son of Mr. Cycle."
Timothy M. Woudenberg received his B.S. in Chemistry from Purdue University in 1980. He worked as an electronics design engineer for Mulab Incorporated from 1980 to 1982. He served as a teaching and research assistant at Tufts University from 1982 to 1987 and there completed his Ph.D. in Physical Chemistry in 1988. He joined Perkin-Elmer in 1987 as an engineer in the Instrument Division of the Biotechnology Department.
Also interviewed were Perkin-Elmer's Robert P. Regusa, biotechnology systems engineering manager for the biotechnology group responsible for the development of the thermocycler instrumentation; Robert L. Grossman, an engineer at Perkin-Elmer involved with the design and manufacture of the thermocycler line; Senior Marketing Specialist Leslie S. Kelley; and Cetus' senior scientist, Richard Respess.
Several participants were also interviewed on audiotape. The audiotapes and transcripts complement the videotape sessions and are available through the Division of Medical Science, National Museum of American History.
Session One (May 14-15, 1992), was recorded at Cetus Corporation, Emeryville, California. Kwok, Sninsky, Saiki, Scharf, Leath, Widunas, Jones, Watson, Respess, Erlich, Gelfand, Mullis and Faloona discussed the invention of the PCR technique, early applications, and development of technologies for automating the process, c. 1980-1992, including:
- participants' biographical data;
- application of the PCR technique to the diagnosis and study of HIV and AIDS;
- invention of the PCR technique;
- introduction of the thermostable Taq enzyme (from the bacterium Thermus aquaticus) to the PCR technique;
- design and engineering of automated thermocycling machines;
- publicizing the invention of PCR;
- use of PCR for genetics research;
- development of commercial thermocycling instruments;
- Cetus' work environment; sale of PCR patent to Hoffman-LaRoche;
- use of PCR in forensics;
- patenting PCR.
Visual documentation included:
- operations of Mr. Cycle, the first generation cycling machine;
- Peltier device;
- demonstrations of the second- and third-generation thermocycling machines;
- Perkin-Elmer TC 4800 and 9600 thermocyclers;
- demonstration of the gel electrophoresis process using the TPCR 9600;
- Mullis diagramming the PCR process;
- gel from first successful experiment;
- Mullis' PCR lecture slides;
- Cetus mural.
Original Masters: 16 Beta videotapes
Dubbing Masters: 8 U-Matic videotapes
Reference Copies: 4 VHS videotapes
Transcript: 156 pages
Session Two (September 25, 1992), was recorded at Roche Molecular Systems, Alameda, California. White, Arnheim, Erlich, and Pasahow discussed the invention of PCR, patent rights, the development of PCR at Cetus, and PCR applications, c. 1980-1992, including:
- participants' biographical data;
- transition of PCR technology from Cetus to Roche Molecular Systems;
- invention and validation of the PCR technique;
- diagnostic applications;
- participants' working relationship with Kary Mullis;
- laboratory culture at Cetus;
- litigation between Cetus and Dupont Company.
Visual documentation included:
- PCR publications;
- exterior of Roche Molecular Systems facility.
Original Masters: 10 Beta videotapes
Dubbing Masters: 5 U-Matic videotapes
Reference Copies: 3 VHS videotapes
Transcript: 96 pages
Session Three (February 25, 1993), was recorded at Perkin-Elmer Corporation, Norwalk, Connecticut. Picozza, Haff, DiCesare, Katz, Seyfried, Barrett, Atwood, Pigliucci, Regusa, Grossman, and Woudenberg discussed the joint venture with Cetus, the design and engineering of commercial thermocyclers, marketing, and future applications, c. 1980-1993, including:
- participants' biographical data;
- thermocycler development to the current 9600 model;
- military and forensic use of PCR;
- current research on new developments in PCR including high performance chromatography and electrochemiluminescence;
- hardware and software development for PCR instruments;
- design and machinery of PCR instrumentation.
Visual documentation included:
- tour of Biotech Engineering and Biotech Chemistry laboratories;
- shipping facility and distribution warehouse;
- various PCR advertising campaigns, Perkin-Elmer newsletter, and PCR Journal;
- demonstration of the parts of the TC Model 4800 and 9600;
- tour of the manufacturing areas including machining center, printed circuit assembly area, electronic test area, sheet metal area, and paint area.
Original Masters: 12 Beta videotapes
Dubbing Masters: 6 U-Matic videotapes
Reference Copies: 4 VHS videotapes
Transcript: 94 pages |
If you have worked through the notes on slope, distance and midpoint, you already know a lot about the algebra of lines. Now we can wrap it up and figure out how to represent a line on a plane with an equation.
That's what it is all about in mathematics: We try to model things using equations (and another kind of mathematical construct you'll learn about later, "functions"). Getting your knowledge of lines and their equations squared away early will really help you to understand much of the math yet to come.
The equation of a line expresses the relationship of each x to its corresponding y to give an ordered pair (x, y) that lies on the line. Ordered pairs (x, y) that don't lie on the line will not satisfy the equation.
Shown here is part of a line represented by the equation y = x + 4. That equation perfectly pairs one y with every x. Only ordered pairs (x, y) that lie on the line will "satisfy" the equation — make it true. Any point not on the line will make the equation false.
For example, when the x- and y-coordinates of the point (7, 11) are substituted for x and y in the equation, the result is a true statement: 11 = 11.
But when the x and y-coordinates of the point (3, 3), which visibly does not lie on the line, are put into the equation, the result is 3 = 7, which is not true.
Our job will be to figure out how to come up with the equation of any line, like the y = x + 4 at left, using just two pieces of information. The first question is: Why two pieces of information?
A line in the x-y plane (or any plane) is fully described if we know how much it is tilted from horizontal or vertical, and where it is located.
We specify tilt by measuring slope, which we defined as the change in the y-coordinate divided by the change in x:
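m = Δy / Δx = (y2 - y1) / (x2 - x1)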
We could easily define slope as Δx / Δy, but the way we do it is a long-standing convention. A positive slope means the line runs from lower left to upper right; a negative slope means the opposite.
All that remains is to specify where the line is on the plane. That turns out to be pretty easy; as we move the line up and down (±y), it simultaneously moves left and right (±x). That means we only need the location of a single point to pin down where the line sits.
Again by long-standing convention, we usually pick the place where the line intersects the y-axis, the y-intercept, where x = 0.
So we generally find the equation that represents a line by finding its slope and its y-intercept. The two most commonly used forms of an equation that models a line are shown below. The slope is as defined above, and the y-intercept, b, is the point (0, b).
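Slope-intercept form: y = mx + b
Standard form: Ax + By = C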
It's not too difficult at all to find correspondences between these two forms of the equation of a line. After all, they represent the same thing. Take the standard form, for example:
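Ax + By = C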
We can begin to morph this equation into slope-intercept form by moving the x-term to the right by subtraction:
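By = -Ax + C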
Then if we divide both sides by B, we get
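y = (-A/B)x + C/B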
Now it's pretty easy to see the correspondences between the standard and slope-intercept forms.
The slope is given by -A/B and the intercept by C/B.
I wouldn't go to the trouble of memorizing these correspondences, however. In practice, the algebra steps to convert between forms are so easy that you just won't have to.
Here are our two points. They could be any two points on the x-y plane; only one line can be drawn through any two points.
There are two ways to find the equation. Different people have different opinions about which is easier. In the first, start by finding the slope:
Now we need to find the y-intercept, (b). To do that, rearrange the slope-intercept formula by subtracting mx from both sides:
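b = y - mx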
Now we use our slope and just one of the points. Either will work, but I'm going to choose one with positive components. That's always a good choice – less possibility of silly errors.
So the intercept is (note that I've converted 7 to 5ths to add the fractions):
Then the formula, in slope-intercept form, is
That's fine, but it's possible to manipulate it so that it has no fractions, just by multiplying both sides by 5:
... and in nice standard form (x and y both on the left, constant on the right), it's:
The other way to find linear equations from two points is to rearrange the slope formula:
If we multiply both sides by (x2 - x1), we get what is called the "point-slope formula."
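y2 - y1 = m(x2 - x1), usually written with a general point (x, y) in place of (x2, y2): y - y1 = m(x - x1)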
Into this we plug one of the coordinates and the slope (still need that!)
Then it's just a matter of multiplying through by the slope (remember to distribute) and solving for y
From here you can convince yourself that we've arrived at the same equation (just convert 7 to 5ths and keep going).
Which method you use is your call.
Let's say that our line passes through the point (-2, 3) and has a slope of m = 7/3. This problem is really just like the one in example 1, except that we can skip the step of calculating the slope because we already know it.
We again have two ways to find the equation of this line. The first is just to use the slope-intercept formula to find the y-intercept:
We use the coordinates of the known point (-2, 3) and the slope to find the y-intercept:
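3 = (7/3)(-2) + b, so b = 3 + 14/3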
Watch the negative signs! If we convert 3 to 3rds, we get
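b = 9/3 + 14/3 = 23/3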
Now we just plug the slope and intercept into the slope-intercept formula, y = mx + b:
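y = (7/3)x + 23/3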
We can always reduce the fractions, in this case by multiplying both sides of the equation by 3:
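3y = 7x + 23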
... and we can convert it to standard form by moving the variables to the left:
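-7x + 3y = 23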
The second method (point-slope formula) will give the same result. Start here:
Now plug in (x1, y1) = (-2, 3), and m = 7/3 :
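y - 3 = (7/3)(x - (-2)) = (7/3)(x + 2)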
Multiply through by the slope,
and reduce to get the same equation:
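y - 3 = (7/3)x + 14/3, so y = (7/3)x + 23/3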
Parallel lines have the same slope. Any two lines with different slope will eventually intersect, and therefore can't be parallel.
Problem: Find the equation of a line passing through (1, 2) and parallel to the line passing through points (-1, -1) and (2, 5).
Our first task is to find the slope of the line for which we have two points:
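m = (5 - (-1)) / (2 - (-1)) = 6/3 = 2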
Because parallel lines have the same slope, the slope of our new line is m = 2. We can plug that slope and our point (1, 2) into the point-slope formula:
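y - 2 = 2(x - 1)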
A little rearrangement gives us
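y - 2 = 2x - 2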
... and finally a very simple equation for our function:
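y = 2x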
Both lines are plotted on the graph below. I'll leave it to you to find the equation of the first graph; the result is y = 2x + 1. You can see that these two equations don't differ in slope (m = 2), but one line (y = 2x + 1) is shifted upward by 1 unit on the graph.
Two lines that are perpendicular in the same plane have slopes that are negative reciprocals of one another. If the slope of a line is m, then the slope of any line perpendicular to it is -1/m.
Problem: Find the equation of a line that passes through the point (-1, 4) and is perpendicular to the line passing through points (2, 7) and (-1, -3).
Here again (see example 3) we first calculate the slope of the line for which we have enough information:
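m = (7 - (-3)) / (2 - (-1)) = 10/3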
Now the slope of a line perpendicular to that line is
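-1/m = -1 / (10/3) = -3/10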
Now we can plug that slope and the one point we know that lies along our new line, (-1, 4), into the point-slope formula:
Remember that the point-slope formula isn't a magic trick; it's just a rearrangement of the definition of the slope of a line.
That gives us
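y - 4 = (-3/10)(x - (-1)) = (-3/10)(x + 1)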
multiplying through and moving the 4 to the right by addition (note that I've converted it to 40/10 so I have a common denominator), we get
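y = (-3/10)x - 3/10 + 40/10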
The slope-intercept form of the equation of our new line is
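y = (-3/10)x + 37/10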
If we multiply through by 10 and move the x to the left by addition, we get a nicer standard form of that equation:
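3x + 10y = 37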
I'll leave it to you to find the standard form of the equation of the other line. It is:
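-10x + 3y = 1 (or, multiplying through by -1, 10x - 3y = -1)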
Now if we plot both lines on the same graph, it's easy to see that they are indeed perpendicular.
1. Find the equation of the line passing through these pairs of points:
(a) (-2, -2) and (6, 4)
(b) (-3, 7) and (4, 2)
(c) (1, 5) and (5, 1)
(d) (-8, -1) and (1, 5)
(e) (4, 4) and (-6, 3)
(f) (22, 4) and (-30, 18)
2. Find an equation for the lines passing through the point given and with the specified slope, m.
(a) (6, 1), m = -2
(b) (-3, 6), m = 5/9
(c) (-5, 5), m = 1/3
(d) (-1, -4), m = 1/4
(e) (2, -5), m = -9/2
(f) (18, -4), m = 0.4
Take a look at the two lines in the figure to the left. The x-y plane is meant to be in the plane of your screen, and the y-z plane is sticking out of it (there's also an x-z plane I haven't shown).
The black line is in the y-z plane, and the magenta line is in the x-y plane. They do not cross, and never do, yet they are not parallel.
We have to be careful about our assumptions in geometry problems: two lines that never intersect are guaranteed to be parallel only if they lie in the same plane. Non-intersecting, non-parallel lines like these are called skew lines.
In addition to the requirement that source rock exists for the generation of hydrocarbons, and that reservoir rock exists for the storage and production of the generated hydrocarbons, traps must also exist to trap, or seal, the hydrocarbon in place forming a hydrocarbon reservoir.
The fluids of the subsurface migrate according to density. As previously discussed, the dominant fluids present or potentially present are hydrocarbon gas, hydrocarbon liquid, and saltwater. Since the hydrocarbons are less dense than the saltwater, they will tend to migrate upward to the surface, displacing the heavier water down elevation. These fluids will continue to migrate until they encounter impermeable rock, which will serve as a reservoir "seal" or "trap." These impermeable rocks serving as reservoir seals, of which shales are among the most common, are referred to as confining beds or cap rocks. Traps exist because of variations in characteristics of rocks of the subsurface. If impermeable rock does not exist, the hydrocarbons will migrate to the surface and dissipate into the environment. In order for a hydrocarbon reservoir to exist, a proper sequence of events must have occurred in geologic time.
Traps can be classified as structural traps, stratigraphic traps, or combinations of the two.
Structural Traps: A structural trap is formed by a shifting or alteration of the originally horizontal formations of the earth's crust. The alteration is caused by the physical processes of plate tectonics, continental drift, earthquakes, rifting, or the intrusion of salt, shale, or serpentine. The intrusion forms faults and folds in the original horizontal formations, thus creating the traps necessary for reservoirs. Other structures common to hydrocarbon reservoirs are folds and faults.
Types of structural traps:
1) Anticline Traps:
Sedimentary beds are generally deposited in horizontal parallel planes over a geographic region, so that many of these sediments will be of essentially uniform thickness over that region. If geologic activity should occur, resulting in the folding of these sediments, the result may be the formation of hydrocarbon reservoirs in anticlinal traps. This trap may exist as a simple fold or as an anticlinal dome. Two major potential advantages of the anticlinal trap reservoir are the simplicity of the geology and the potential size of the trap, and therefore of the hydrocarbon accumulation. The high part of the fold is the anticline, and the low part of the fold is the syncline. Since the hydrocarbons are the less dense of the subsurface fluids, they will tend to migrate to the high part of the fold. Consider the hydrocarbon reservoir illustrated in Figure 18, in which sandstone is the reservoir rock and shale is the cap rock. Sedimentary beds are deposited in a water environment, as indicated by the presence of limestones and shales. During or after lithification, geologic activity causes folding of the sediments. After folding and lithification, the sandstone has a 100% connate water saturation. Millions of years later, hydrocarbon generated in source rock down elevation from this anticlinal fold is forced from its source rock into the water-saturated, permeable sandstone. Since hydrocarbon is less dense than the water, it begins to migrate up elevation, displacing the heavier water down elevation. As it migrates upward, pressure decreases. At some point in this migration, the reservoir fluid pressure might equal the bubble point pressure of the original hydrocarbon combination. From this point upward, gas is being released from the hydrocarbon. Since the gas is so much less dense than the oil or the water, it will migrate more rapidly toward the top of the anticlinal trap. This process of migration and fluid separation according to density may continue over millions of years in geologic time until, finally, equilibrium is achieved as the hydrocarbon fluids accumulate within the trap formed by the impermeable shale cap rock. When this condition of equilibrium is finally achieved, there will be a gas zone (gas cap) on top of an oil zone and then a water zone beneath the oil zone.
2) Fault Traps:
A fault implies fracturing of rock and relative motion across the fracture surface. Consider a possible sequence of geologic events that, in geologic time, could create a fault trap. Sedimentary beds are deposited in a water environment, as indicated by the presence of shales and limestones. During or after lithification, geologic events result in uplift of these originally horizontal sediments, with fracturing and tilting above sea level, so that the surface rocks are exposed to erosion. During uplift, the rocks are fractured and slippage occurs along the fault plane. This brings the shale across the fault so that it seals the tilted sandstone below the fault. Millions of years later, hydrocarbon generated in its source rock down elevation from the fault is forced into the connate water-saturated sandstone. Since the hydrocarbon is less dense than the water, it will migrate up elevation, displacing the heavier water down elevation. This upward migration will continue until it reaches the fault and is trapped by the impermeable shale. If the faulting had not occurred, the hydrocarbon would have continued to migrate upward until it was dissipated at the surface into the environment. Since faulting occurred, the shale provides the necessary seal, resulting in the existence of the hydrocarbon reservoir. Notice that, in this example, if slippage had occurred to a greater extent, there would have been flow into the permeable sandstone above the fault. The hydrocarbon would have been lost to the surface, and no reservoir would have been formed. This situation illustrates the significance of geologic probability.
What is the probability that the relative motion across the fault would have resulted in a reservoir seal being formed? Geologic events must occur in the proper sequence, resulting in the proper geologic conditions for a reservoir to exist. The North Sea hydrocarbon environment is an excellent example of the significance of this geologic probability. Of the hydrocarbon generated in the source rock of the North Sea, it is estimated that less than 10% was trapped. Over 90% of the hydrocarbon was lost back to the surface in geologic time and dissipated into the environment because traps were not present. Fault traps leading to the presence of hydrocarbon reservoirs are often difficult to define because of the complexity of the geology.
3) Salt Dome Traps:
Consider the salt dome geologic system illustrated in the figure and a possible sequence of geologic events that could lead to the formation of this salt dome environment. A major portion of a continental plate was below sea level at a point in geologic history. Due to geologic events, this region rose above sea level, trapping inland a salt water sea. As geologic time passed, the climate changed to a desert environment. This event could have resulted from movement of the continental plate near to the equator. In this arid desert environment, water evaporated from the salt water sea, leaving the salt residue on the dry seabed. As millions of years passed in the desert environment, sand blew over the salt to cover and protect the salt sediment. Later geologic events resulted in the sinking of the region below sea level, followed by tens of millions of years of sedimentation in the resulting water environment. As time passed, lithification occurred. The desert sand became sandstone, and the salt became rock salt (sedimentary salt).
After lithification, this salt bed was impermeable. It also had two properties significantly different from typical shale, sandstone or limestone:
• It was less dense, with a measurably smaller specific weight.
• At subsurface overburden pressures and temperatures, the rock salt was a plastic solid (it was highly deformable).
The combination of this lesser density and plasticity resulted in a buoyant effect if flow possibilities existed. Geologic events caused fracturing of overlying confining rocks. The salt, forced upward by the overburden pressures, began to flow plastically back to the surface, intruding into the overlying rock structures
to lift, deform, and fracture them. The intruding salt was solid, yet geologically deformable. It might intrude at an average rate of only 1 inch per 100 years, yet on a geologic time basis such deformation is highly significant. This rate would result in 10 inches in 1,000 years, or 10,000 inches (833 ft) in 1 million years. In a geologic time period of only 10 million years, this salt dome could intrude to a height of over 1.5 miles into the overlying structures. Obviously, a vertical subsurface structure 1.5 miles high is geologically significant. Since the salt is impermeable, the region around the perimeter of the salt dome is an ideal geologic environment for hydrocarbon traps. The tendency of the intruding salt to uplift the rocks as it intrudes enhances the separation of the less dense oil from the more dense salt water by reducing the area of the oil-water contact. The fracturing of surrounding rocks due to the intruding salt and the lifting of the rocks above the salt dome also provide an environment for the existence of fault traps and anticlinal traps, in addition to the salt dome traps around the perimeter of the dome. A salt dome region, therefore, is an excellent geologic environment for all three types of traps discussed so far. An excellent example of a salt dome trap is Spindletop, near Beaumont, Texas. The first major discovery and resultant initial oil boom at Spindletop occurred in 1901. Through the 1890s Patillo Higgins had promoted drilling for oil outside Beaumont. He concluded that it was an excellent geologic environment for hydrocarbon reservoirs because he noted a location near Beaumont where the surface elevation was 15 ft higher than the surrounding land. This rise was a circle approximately 1 mile in diameter. He concluded that this indicated high points in the underlying geology. In 1901, Captain Anthony Lucas drilled a wildcat well at this location, resulting in the Spindletop discovery. Future drilling confirmed that this reservoir existed as an anticlinal dome trap, with the dome created by the uplift of overlying rocks by an intruding salt dome in the subsurface, creating the surface indication of what the subsurface geology might be. The second oil boom at Spindletop began in the mid-1920s.
When further wells were drilled, it was discovered that fault trap and salt dome trap reservoirs existed around the circumference of the salt dome. The drilling pattern for the wells drilled during this later activity was almost a perfect circle as these circumferential reservoirs were developed.
Stratigraphic Traps: A stratigraphic trap is formed by a change in the lithology of the rock sequence. This change is caused by erosional forces or changes in rock type within a limited areal extent. An unconformity is an erosional feature where a portion of the geological sequence is eroded and an impermeable rock is deposited on top of a porous formation. The process of erosion will enhance or create the porosity and permeability necessary for the existence of a petroleum reservoir. Other stratigraphic traps include channel sand deposits surrounded by shale, the growth of limestone reefs, and the formation of barrier islands or sand bars along an ancient shoreline.
Classification of stratigraphic traps:
-Primary Stratigraphic Traps:
These traps result from deposition of clastic or chemical materials. Shoestring sands, lenses, sand -----es, bars, channel fillings, facies changes, strand-line (shoreline) deposits, coquinas, and weathered or reworked igneous materials are classified as clastic sedimentary deposits and can result in stratigraphic traps. An ancient sand-filled stream channel meander has cut into older south-dipping shales and created a perfect stratigraphic trap.
The shale plug served as the seal for reservoirs within a west-plunging structural nose. Hydrocarbons are trapped in the truncated up dip portions of the reservoirs. Organic reefs or bioherms and biostromes are the primary chemical stratigraphic traps; they are built by organisms and are foreign bodies to the surrounding deposits. The Strawn and Cisco-Canyon series are limestone reefs that have had younger shales deposited over them to form the seal. Differential compaction of the thicker shales on the flanks of the reef as compared with the thinner shale at the crest has created structural closure in younger overlying formations. Hydrocarbon accumulations have occurred in the Cisco and Fuller formations as a result of this differential compaction. Additional traps in other reservoirs are the result of up dip permeability and porosity barriers and are either primary or secondary stratigraphic traps.
Secondary Stratigraphic Traps:
Traps of this type were formed after the deposition of the reservoir rock by erosion and/or alteration of a portion of the reservoir rock through solution or chemical replacement. Secondary stratigraphic traps actually should fall into the combination-trap classification because most are associated with or are the result of structural relief during some stage of development of porosity and permeability or limitation of the reservoir rock. However, many of the so-called typical "stratigraphic traps" fall into this category, and it is felt that it would be impossible to change the historical usage of this term. Therefore, secondary stratigraphic traps are defined for this discussion as those traps created after deposition and having limitations caused by lithology changes.
Erosion creates a major part of these through truncation of the reservoir rock. On-lap deposition (when the water is encroaching landward), off-lap deposition (when the water is regressing), and the chemical alteration of limestone result in many secondary stratigraphic traps. One example is the truncation of the Woodbine formation as it approaches the regional Sabine uplift. A certain amount of leaching of the cementing material by waters over the unconformity has resulted in increased porosity and permeability in the field as compared with similar Woodbine sands in the deeper portions of the East Texas basin.
Testing and Revising the Drake Equation
Armed with an estimate of the number of communicative civilizations in our galaxy, SETI scientists set out to find them. They had two basic options: face-to-face communication or long-distance communication. The former scenario required that extraterrestrials visit humans or vice versa. This seemed highly unlikely given the distances between our solar system and other stars in the Milky Way. The latter scenario involved radio broadcasts, either sending or receiving electromagnetic signals through space.
In 1974, astronomers intentionally transmitted a 210-byte message from the Arecibo Observatory in Puerto Rico in the hopes of signaling a civilization in the globular star cluster M13. The message contained fundamental information about humans and our corner of the universe, such as the atomic numbers of key elements and the chemical structure of DNA. But this sort of active communication has been rare. Astronomers mostly rely on passive communication -- listening for transmissions sent by alien civilizations.
A radio telescope is the tool of choice for such listening experiments because it's designed to detect longer-wavelength energy that optical telescopes can't see. In radio astronomy, a giant dish is pointed to a nearby, sunlike star and tuned to the microwave region of the electromagnetic spectrum. The microwave frequency band, between 1,000 megahertz and 3,000 megahertz (MHz), is ideal because it's less contaminated with unwanted noise. It also contains an emission line -- 1,420 MHz -- that astronomers can hear as a persistent hiss across the galaxy. This narrow line corresponds to energy transformations taking place in neutral hydrogen. As a primordial element of the universe, hydrogen should be known to all intergalactic civilizations, making it an ideal marker. Several teams from around the world have been systematically listening to stars across the Milky Way and adjacent galaxies since 1960.
Despite their collective efforts, no SETI search has received a confirmed extraterrestrial signal. Our telescopes have picked up a few unexplained and intriguing signals, such as the so-called "Wow" signal detected by researchers at Ohio State University in 1977, but no transmission has been repeated in such a way that it provides indisputable evidence of extraterrestrial life. All of which brings us back to the Fermi Paradox: If thousands of civilizations exist in the Milky Way galaxy, why haven't we detected them?
Since Drake and Sagan made their estimates, astronomers have become more conservative. Paul Horowitz, who boldly guaranteed the existence of extraterrestrial life, has generated more modest results from the Drake Equation, finding that N may be closer to 1,000 civilizations [source: Crawford]. But even that figure may be too large.
In 2002, Skeptic magazine publisher Michael Shermer argued that astronomers weren't being critical enough in their evaluation of L, the length of time a civilization remains detectable. Looking at 60 civilizations that have existed on Earth since the dawn of humanity, Shermer came up with a value for L that ranged from 304.5 years to 420.6 years. If you plug these numbers into the Drake Equation, you find that N equals 2.44 and 3.36, respectively. Tweak the numbers some more, and you can easily get N to fall to one or even lower. Suddenly, the odds of hearing from an extraterrestrial life form are considerably lower.
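For readers who want to see the arithmetic, here is a minimal Python sketch of that calculation. The per-year rate used below is not a published figure; it is simply inferred by dividing Shermer's quoted N by his quoted L, with the other six factors of the Drake Equation folded into that single number.

# Drake Equation: N = R* x fp x ne x fl x fi x fc x L.
# Everything except L is collapsed into one "birth rate" (civilizations per
# year), inferred from the quoted result N = 2.44 when L = 304.5 years.
def drake_n(lifetime_years, birth_rate=2.44 / 304.5):
    """Expected number of detectable civilizations for a given lifetime L."""
    return birth_rate * lifetime_years

for L in (304.5, 420.6):
    # Prints about 2.44 and 3.37; the quoted 3.36 differs only by rounding.
    print(f"L = {L} years -> N = {drake_n(L):.2f}")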
Even the most enthusiastic SETI supporters are troubled by the lack of results produced by more than 40 years of "listening" to the cosmic airwaves. And yet most of that search has been confined to our home galaxy. Even if there are only three or four civilizations per galaxy, there are billions and billions of galaxies. This tilts the odds again in favor of finding extraterrestrial life, which is why many SETI astronomers take the same approach to their work as lottery players: You can't win if you don't play. |
1717 First levee built by Europeans along the Mississippi River
Sieur Leblond de la Tour, the French engineer who designed New Orleans , constructed the first levee along the Mississippi River . Upon completion, the levee was 3 feet high, 5400 feet long, and 18 feet wide at the top. It doubled as a roadway.
1735 Extension of the levee system
The extension of the levee line kept pace with the establishment and growth of settlements. Each planter was required to complete the levee along his own property front. By 1735, the levee line extended along both banks of the river from 30 miles above New Orleans to 12 miles below.
1743 French hasten development of levee system
A French ordinance required all inhabitants of the valley to complete their levees by January 1, 1744 under the penalty of forfeiture of their lands to the French Crown.
1775 First Chief of Engineers Named
Congress organized the Continental Army on June 16, 1775 , and provided for a Chief Engineer and two assistants. Colonel Richard Gridley of Massachusetts , one of the few colonials with experience in the design and construction of batteries and fortifications, became General George Washington's first Chief Engineer. As Chief Engineer, Colonel Gridley's first task was to build fortifications near Boston at Bunker Hill.
1802 Congress created the modern Army Corps of Engineers
With the creation of the U.S. Army Corps of Engineers, the nation committed itself to a century-old French tradition of public works in which the army guided construction under the auspices of a rational, centralized state.
1803 Louisiana Purchase
Napoleon Bonaparte negotiated the sale of the Louisiana territory with American negotiators James Monroe and Robert Livingston. The new American government sought to facilitate trade and to develop the region's rich economic potential. With the extension of American control, the floodgates were thrown open to frontiersmen eager to settle the fertile lands of the Mississippi Alluvial Valley , and the population of that region grew dramatically.
1811 The Arrival of the Steamboat
The arrival of the first steamboat, the New Orleans , on the lower Mississippi River heralded a commercial revolution that transformed the Mississippi Valley and ushered in a golden age for the city of New Orleans . A little more than a decade later, 75 steamboats worked the Mississippi River Valley ; by mid century, there were 187.
1812 Levee lines extended
By 1812, when Louisiana was admitted into the Union , the levee line extended from the lowest settlements to Baton Rouge on the left bank, and to Pointe Coupee on the right bank.
1822 Bernard and Totten Report
Army engineers Simon Bernard and Joseph G. Totten presented the first official U.S. survey of the Mississippi River. Concerned primarily with the improvement of navigation, the study stressed the value of levees in promoting commerce.
1824 Turning Point in Federal Involvement
The U.S. Supreme Court ruled in the Gibbons v Ogden decision that, under the “commerce clause” of the U.S. Constitution, the federal government had the power to regulate river navigation “so far as that navigation may be in any manner connected with commerce.” Thus empowered, the federal legislature quickly passed the General Survey Act, which set a precedent for appropriations for internal improvements on a national scale, and the first rivers and harbors legislation, which contained an appropriation of $75,000 to improve navigation on the Ohio and Mississippi rivers. Yet, the authorities under which Congress passed the unprecedented bills did not grant the prerogative to finance flood-control works. Such an endeavor remained a function of the individual states.
1826 Congress passed the Rivers and Harbors Act of 1826
Although the 1824 act was considered the first rivers and harbors legislation, the Rivers and Harbors Act of 1826 was the first law to combine authorization for both surveys (General Survey Act) and projects (RHA of 1824), thereby establishing a pattern that continues today.
1828 Mississippi River flood
The flood of 1828 is generally believed to be the greatest flood of the nineteenth century.
1831 Shreve's Cutoff
Henry Shreve executed an artificial cutoff at Turnbull Bend on the Mississippi River, with the aims of shortening the Mississippi River, eliminating shoaling at the mouth of the Red River, and increasing the volume of flow into the Atchafalaya River. Shreve's actions played lasting havoc with the dynamics of the three rivers and their relationships with one another.
1837 Problems at the Passes
While the use of steam greatly increased the size of oceangoing vessels, these larger ships found it more difficult to navigate the bars that choked the Mississippi River 's several outlets to the sea. In 1837, navigators abandoned the badly shoaled Northwest Pass in favor of the deeper Southwest Pass.
1844 Levee lines reach the mouth of the Arkansas River
Continuous levee lines extended from 20 miles below New Orleans to the mouth of the Arkansas River on the right bank and to a point opposite Baton Rouge on the left bank. In addition, many isolated levees extended along the lower part of the Yazoo front.
1849-1850 Swamp Acts
Congressional members from Louisiana led a fight to secure the transfer of swamp lands of the states along the Mississippi Valley , culminating in the Swamp Land Grants of 1849 and 1850. Revenue raised from the sale of those lands paid for further levee construction and encouraged the organization of levee districts throughout the lower valley. The acts represented the first step toward the federalization of flood control on the Mississippi River , but the onus of flood protection remained on the shoulders of local governments.
1852 Ellet Report
Charles Ellet, a civil engineer working for the Corps of Engineers, completed a topographical and hydrographical survey of the delta of the Mississippi River . His report to Congress advocated greater federal responsibility for the control of floods in the lower Mississippi River and favored a comprehensive plan for controlling floods--a plan which included, in addition to levees, the construction of headwater reservoirs, the enlargement of existing outlets, and the creation of an artificial outlet.
1852 Latimer Board
Conditions at the Mississippi River's outlets continued to deteriorate as 40 oceangoing vessels ran aground on sandbars at the Southwest Pass, causing delays of up to 8 weeks. In response, the Secretary of War appointed an advisory board under the command of Navy Captain W.K. Latimer to study the riddle of the passes. In addition to Latimer, three army engineers comprised the board, the most notable being Captain John Gross Barnard. The board recommended dredging at the passes and, should those efforts fail, advocated the construction of a jetty system. As a last resort, the board recommended the construction of a ship canal from Fort St. Philip to the Gulf.
1858 Highest pre-Civil War stage of levee development reached
Levees extended in an intermittent line along the west bank from Commerce Hills to Pointe-a-la-Hatchie. On the east bank, the levees protected the Yazoo basin and extended from Baton Rouge to Pointe-a-la-Hatchie. The levees were deficient in height and cross section, and the 1858 flood caused 32 crevasses. The levee system of 1858 marked the highest stage of lower Mississippi Valley development and set a standard that could not be successfully maintained.
1861 Publication of the Delta Survey
After more than ten years of exhaustive research, A. A. Humphreys and Henry L. Abbot completed the Report Upon the Physics and Hydraulics of the Mississippi River, commonly referred to as the Delta Survey. The study represented the most thorough analysis of the Mississippi River ever completed and won the respect of engineers around the world. Both in terms of the data gathered and the conclusions rendered, the report influenced the development of flood control policy well into the twentieth century.
1861-1865 U.S. Civil War
Necessarily preoccupied over the next four years and beyond, the people of the lower Mississippi Valley abandoned their flood control efforts altogether, and, very quickly, the levees began to deteriorate. The general neglect of the levee system throughout the war years resulted in untold damage to the system, as whole sections fell into disrepair and were washed away by the river. A major flood in 1862 hurried this process. The levees sustained further damage as a result of military operations in 1863 and 1864. By the end of the war, the neglected levee system was in shambles.
1867 Construction of the Eads Bridge began
James B. Eads, an internationally recognized civil engineer, began constructing a bridge across the Mississippi River at St. Louis.
1870 Pennsylvania v Wheeling Bridge Co
The U.S. Supreme Court ruled that the right of Congress to regulate navigable waterways included the right to make improvements. This ruling merely confirmed what Congress and private interests had long taken for granted.
1873 The Fort St. Philip Canal
A decline in river commerce at New Orleans prompted Congress to direct the Chief of Engineers, A.A. Humphreys, to develop plans and estimates for a ship canal. Humphreys directed Major Charles Howell to make the study, and Howell concluded that the Fort St. Philip canal was feasible at a cost of $7.4 million. President Grant appointed a board of engineers to study Howell's report. Six of the seven board members supported Howell's conclusions, but the board president, Colonel John Gross Barnard, a member of the Latimer board and a proponent of a jetty system, filed a minority report, jeopardizing the passage of a ship canal bill.
1873 The Eads and Humphreys clash began
In May, Eads condemned the Fort St. Philip ship canal proposal and instead advocated the use of jetties to deepen the passes. Five months later, the secretary of war instructed Humphreys to organize an engineer board to investigate the impacts of the Eads Bridge on steamboat traffic. The board quickly concluded that the bridge lacked adequate clearance for steamboats and recommended that Eads construct a canal bypassing the bridge at his own expense. Eads successfully appealed to President Grant to overturn the board's recommendation. These incidents sparked a long-running public feud between Eads and Humphreys.
1874 The Eads jetty proposal
While Humphreys continued his push for a Fort St. Philip ship canal bill, Eads went before Congress to lobby for his jetty proposal. Under the terms of his proposal, Eads' jetties would maintain a depth of 28 feet at the Southwest Pass for $10 million. Eads also offered an extraordinary inducement: Congress would pay nothing if his jetty project failed to reach the required depth. In response, President Grant appointed a board of engineers to study the jetty and the ship canal proposals. The board voted 6-1 in favor of Eads' jetty proposal, but for the South Pass rather than the Southwest Pass as Eads had desired. On March 3, 1875, Grant signed the Eads jetty bill into law over the strenuous objection of Humphreys.
1874 Flood and Warren Commission Report
A great flood in 1874 exploited the still-weakened levee system and wreaked havoc on the lower valley. The resulting suffering and devastation forced the federal government to redirect its attention to the flood problems of the delta. That year, the U.S. Congress approved an act creating a commission of engineers "to investigate and report a permanent plan for the reclamation of the alluvial basin of the Mississippi River subject to inundation." To that end, President Grant appointed General G. K. Warren as commission chairman and appropriated $25,000 for the study. After considerable analysis of the flood problem in the delta, the Warren Commission criticized the efforts and methods of local flood control and emphasized the need for greater federal commitment to the control of the Mississippi River. The report's solid recommendation for greater federal commitment stimulated the growth of favorable public sentiment and encouraged flood control advocates in Congress.
1875 House Standing Committee on Mississippi Levees
Led by Louisiana Congressman Randall Lee Gibson, flood control advocates convinced House Speaker Michael C. Kerr of Indiana to authorize the creation of a House standing committee on Mississippi levees. Beginning with its inception on December 10, 1875 , this committee became the battering-ram for flood control interests in Congress and remained so for more than thirty-five years. The creation of the Mississippi River Commission was among the committee's most significant achievements.
1876-1879 Jetty system completed
By February 1876, the Eads jetty system successfully deepened the South Pass to 13 feet. Soundings in October revealed the South Pass had deepened to 20 feet; additional soundings in December indicated depths at 22 feet. By 1879, Eads jetty system reached a central depth of 30 feet at the South Pass. The success of the South Pass jetties shaped the development of river-management policy for the lower Mississippi River . Eads' success proved that--under the right circumstances--jetties could direct the river to scour out and deepen its own channel. Before long, prominent civil and military engineers became convinced that the Mississippi 's own energies could be directed to the task of deepening the channel and improving navigation along the whole length of the river. Eads himself was the leading proponent of this idea. In a speech made at New Orleans , La. , he proposed to "set the river to work in the bottom of its bed, as we did at the jetties, and, while deepening it for the benefit of commerce lower its haughty crest forever." Flood control advocates in Congress seized upon the idea that a properly-constructed levee system could promote navigational improvements and began looking for ways to implement Eads' ideas.
1879 Creation of Mississippi River Commission
Congress established the Mississippi River Commission (MRC) to develop and oversee the implementation of plans to "improve and give safety and ease to navigation" and to "prevent destructive floods" on the Mississippi River. This seven-member executive body consisted of three representatives from the U.S. Army Corps of Engineers, one representative from the Coast and Geodetic Survey, and three civilians, at least two of whom were required to be engineers. All members of the MRC were to be appointed by the President of the United States and confirmed by the Senate. President Rutherford B. Hayes selected army officers Quincy A. Gillmore, Cyrus B. Comstock, and Charles Suter as members of the original MRC, along with Henry Mitchell of the Coast and Geodetic Survey and civilians James Eads, Benjamin Harrod, and Benjamin Harrison. Because Congress established the MRC as an executive body, only a simple majority vote was necessary for the passage of any resolution, resulting in compromise, and sometimes inconsistent, policy. The MRC held its first meeting on August 19, 1879, and agreed to establish its headquarters in St. Louis.
1880 MRC presented its first report to Congress
The MRC met in January to discuss the basic principles for improving the river. The dialogue over the following 10 days was marked by a display of divergent views. All members agreed on a basic plan of improvement based on contracting the channel and protecting riverbanks from erosion, but disagreed on the value of levees as aids to navigation and the necessity of closing all gaps and outlets. In its preliminary report to Congress, the MRC advanced a navigation improvement plan based on channel contraction, bank revetment, and the closure of all outlets, with the exception of the Atchafalaya River, the subject of which had been turned over to the MRC Committee on Outlets and Levees for further study. The MRC report also advocated a policy of restraint in the interest of navigation dependent on closing gaps and restoring broken levees to their former height. While the report described the levee system as a desirable, although unnecessary, component of low-water contraction, the report warned that the repaired levees would not be of sufficient height to prevent destructive floods. Two members of the MRC, Comstock and Harrison, filed a minority report and expressed their disagreement with the majority opinion on the value of closing outlets and gaps in the levee system as aids to low-water navigation.
1881 Congressional Appropriation Bill for the MRC
Despite the MRC request for $5.3 million to fund its efforts for the first year of the plan advanced in 1880, Congress appropriated only $1 million, forcing the MRC to limit its work to two reaches of the river at Plum Point and Lake Providence. This bill also included a proviso that prohibited the MRC from funding levee repairs or construction for the purpose of protecting private property from floodwaters. This proviso essentially eliminated flood control from the MRC's general plan of improvement for the river.
1882 Mississippi River flood and MRC reorganization
Levees protecting the Mississippi Valley failed at 284 different locations, and many were simply overtopped, reflecting the complete inadequacy of the levee system. In response, Congress passed a bill that authorized limited levee construction for the purpose of improving navigation, but not for flood control. Shortly thereafter, the MRC Committee on Outlets and Levees issued its report recommending a continuation and refinement of the policy of restraint in the interest of navigation. The revised levee plan called for a standard levee grade capable of accommodating a discharge comparable to that of the 1882 flood with a 3-foot safety margin. The committee noted that, while this 1882 grade “would provide the maximum effect in channel improvement at the minimum cost,” the higher levees “would not be of sufficient height to protect adjacent lands from overflow during great floods.” The committee also issued its recommendations for treatment of the Atchafalaya River. Instead of closing the outlet as Eads wanted, the committee proposed to keep the river open by constructing a low-sill brush dam across Old River to check the enlargement of the outlet. As a result of the 1882 Rivers and Harbors Act, the Corps of Engineers became responsible for implementing the Commission's plans. The Commission divided the Mississippi below Cairo into four administrative districts: the First MRC District at Cairo, the Second MRC District at Memphis, the Third MRC District at Vicksburg, and the Fourth MRC District at New Orleans.
1883 Eads resigned from the MRC
Frustrated in his attempts to have the Atchafalaya River closed as an outlet for the Mississippi River and bitter toward the MRC policy of restraint in the interest of navigation, an increasingly disinterested Eads resigned from the MRC. Shortly thereafter, Eads admonished Congress and the MRC for focusing their attention solely on navigation and not the protection of alluvial lands from flooding.
1885 First District headquarters moved from Cairo to Memphis
1886 Congress prohibited the MRC from revetment work
Bank revetment work to improve navigation along the channel was prohibited by Congress in 1886, just as technical advances were finally providing effective bank protection. Irregular and inadequate Congressional appropriations and a tendency by the federal legislature to dictate engineering policy had effectively paralyzed the MRC by the end of the decade.
1888 Rivers and Harbors Act of 1888
The act stipulated that the Corps was authorized to require owners of obstructive bridges to modify the bridge, at their own expense and effort, so as to provide for reasonably free and unobstructed navigation.
1890 Flood focused Congressional attention on river problems
Once again, a major flood proved the inadequacy of the Mississippi River levees. Following the flood, Congress appropriated $3.5 million for the MRC. Additionally, for the first time, this bill did not include the standard proviso against levee construction for the purpose of controlling floods. This landmark piece of legislation contributed to the rapid expansion of levee construction under the MRC in the first half of the decade. The Rivers and Harbors Act of 1890 was the first general legislation that gave the Corps jurisdiction and authority over the protection of navigable waters. Though the Corps issued permits under Section 7 of the 1890 Act, amended in 1892, the law was found to be crude and clumsy.
1896 MRC abandoned contraction and revetment efforts
Faced with the lingering realities of the fiscal and legal constraints of the previous decade, the MRC admitted that its attempts to improve the navigability of the river through bank revetment and contraction works had been generally unsuccessful. Congress, in turn, authorized the construction of dredges "with the view of ultimately obtaining and maintaining a navigable channel from Cairo down, not less than two hundred and fifty feet in width and nine feet in depth at all periods of the year except when navigation is closed by ice." In response, the Mississippi River Commission created an independent dredging district at St. Louis. The district's plant and equipment were later transferred to West Memphis.
1897 Mississippi River flood
This destructive flood forced Congress to reassess the value and direction of its flood control program for the lower Mississippi River.
1898 Nelson Report
A Congressionally-sponsored investigation into alternative flood control methods yielded no change in policy. The subsequent Nelson report advocated the continuation of the federal levee policy for the lower Mississippi River . Additionally, a flood that same year caused no breaks in the levees. For the first time since the commencement of a continuous levee line along the lower Mississippi , a flood reaching the height of fifty feet at Cairo was safely discharged to the Gulf of Mexico without a single break in the levees.
1899 Corps of Engineers regulatory mission expands
The act rewrote the 1890 act and served as a compilation of all laws for protection of navigable waters. Section 10 of the 1899 act gave the Corps the authority to regulate activities that might lead to potential obstructions to navigation.
1902 The Board of Engineers for Rivers and Harbors created
Congress created the board to approve or reject river development projects.
1903 Mississippi River flood
The great flood of that year breached the levees. According to the Commission, all crevasses in the line resulted from the "unfinished nature of the levees as regards both grade and section." The push for higher levees continued.
1905 Rivers and Harbors Act
At the recommendation of the Board of Engineers for Rivers and Harbors, the act stipulated that dredging be the primary means of maintaining a navigation channel on the Middle Mississippi River, as was done on the river south of Cairo.
1906 MRC jurisdiction expanded
The 1906 Rivers and Harbors Act expanded the jurisdiction of the MRC by authorizing the construction of levees between the Head of Passes and Cape Girardeau , Mo. , thus extending the Commission's responsibilities for levees above Cairo to the head of the St. Francis Basin.
1907 Inland Waterways Commission created
President Theodore Roosevelt led a burgeoning conservationist movement that set out to establish a barrier of federal regulation and protection for the nation's land and water resources. The IWC was charged with developing a national policy for river regulation and with making recommendations for the improvement of the national system of waterways. Gifford Pinchot and Nevada Senator Francis G. Newlands were among the most prominent members of this commission, and both opposed the MRC levee policy, favoring instead a more varied approach to flood control which would include reservoirs and outlets. Their report, released February 3, 1908, advocated the creation of a permanent commission that would be tasked with coordinating the various federal agencies responsible for regulating the nation's water resources, including the Army Corps of Engineers, the Bureau of Soils, the Forest Service, the Bureau of Corporations, and the Reclamation Service. This coordinating board would consider, among other things, all matters of irrigation, swamp and overflow land reclamation, and flood control. The MRC opposed the creation of this competing agency and, together with its allies in Congress, delayed the creation of a permanent IWC for a full decade.
1907 6-foot channel authorized on the upper Mississippi
The Rivers and Harbors Act of 1907 authorized a 6-foot channel on the Upper Mississippi River . The method used in attempting to achieve a six-foot depth was open river regulation.
1908 Corps of Engineers established the Western Division
The Western Division was established with headquarters in St. Louis , and district offices at St. Paul , Kansas City, St. Louis , Memphis , and Vicksburg . The Western Division had jurisdiction over specific work on the Mississippi River from its headwaters to Baton Rouge.
1909 Attorney General restricts permit authorizations
The U.S. Attorney General issued an opinion ruling that the Corps could not look beyond navigation to deny a permit. The Rivers and Harbors Act of 1909 required consideration of hydroelectric power development in all subsequent river improvement projects.
1910 Portions of the 1905 Rivers and Harbors Act repealed
The Rivers and Harbors Act of 1910 adopted a return to permanent improvement structures over dredging as the primary means of establishing a navigation channel.
1912-1913 Floods threatened the reclamation of the valley
In those years, the Mississippi Valley experienced successive record-breaking floods which precipitated a crisis in the reclamation program. The tremendous expense incurred as a result of the regular inundation of the Valley, combined with the cost of building, maintaining, and repairing the levee system, was becoming prohibitive. Out of self-preservation, landowners in the valley launched a massive propaganda campaign directed at obtaining greater federal commitment.
1913 Townsend Report and the expansion of MRC jurisdiction
Following the 1913 flood, President Woodrow Wilson directed the MRC to submit a report on flood control. This report, authored by the MRC's president, Colonel Curtis McDonald Townsend, considered six methods of flood control: reforestation, reservoirs, cut-offs, outlets, floodways, and levees. As with all previous reports, the Commission condemned the various alternatives to levees and advocated a continuation of policy. The Rivers and Harbors Act of 1913 authorized the MRC to complete a survey and examination of the upper Mississippi River between Cairo and Rock Island, with a view toward building levees for navigational purposes.
1914 MRC considered implementation of spillways
New Orleans interests, alarmed that the 1912 flood reached a record stage of 21 feet on the Carrollton gage and convinced that levee construction and gap closing upstream were to blame, persuaded the MRC to study the feasibility of constructing emergency spillways as supplements to the levee system. The MRC report, completed by Major Clarke S. Smith, the MRC Secretary, examined six possible locations but, citing fears of interrupting the continuity of the existing levee line and the threat of backwater flooding, concluded that a suitable location for a spillway could not be found.
1916 MRC levee work on the upper Mississippi
The 1916 Rivers and Harbors Act affirmed that congressionally appropriated funds for improving the river between the mouth of the Ohio River and the Head of Passes could be expended “for levees upon any part of said river between Head of Passes and Rock Island.”
1917 Ransdell-Humphreys Act (First Federal Flood Control Act)
This landmark act committed the federal government, for the first time, to flood control for the Mississippi Valley . It also extended the Commission's jurisdiction to include the water-courses connected with the Mississippi River to the extent necessary to exclude flood waters from the upper limits of any delta basins. A year later, the MRC created the Northern MRC District to administer levee operations from the northern boundary of the First District to Rock Island.
1918 The Federal Barge Line
When the railroad industry proved inadequate to meet increased transportation demands during World War I, river traffic was revived as a means to supplement rail. Through the Federal Control Act, Congress created a federal barge line between St. Louis and New Orleans, while spending nearly $8 million to restore the Mississippi River as a great freight-carrying waterway. From 1917 to 1920, the value of river commerce increased from $15 million to $31 million.
1921 Closure of Cypress Creek completed
In 1919, the MRC consented to the Southeast Arkansas Levee District request to close the Cypress Creek Gap. The final closure of the Cypress Creek Gap in 1921 denied the Mississippi River its final natural overflow outlet, with the exception of the Atchafalaya, the fate of which remained unsettled.
1922 Record-breaking Mississippi River flood below White River
The 1922 flood quickly surpassed all previous record stages below the mouth of the White River despite having a discharge considerably less than the floods of 1912 and 1916. Downstream interests attributed the increase in flood stages to the closure of Cypress Creek and resumed their demands for an emergency spillway to reduce flood heights at New Orleans, but the MRC continued its defense of the levee system. The flood gave impetus to a movement to establish a national hydraulic laboratory to identify workable solutions to flood problems, rather than relying on the existing practice of hands-on observations. Both the MRC and the Corps of Engineers were unreceptive to the laboratory idea and helped to defeat bills for the establishment of such a laboratory in 1922 and 1924.
1923 Second Flood Control Act
This act provided $60 million for levee construction over a ten-year period for the purpose of completing the levee system along the lower Mississippi River.
1925 The Pointe-a-la-Hatchie Spillway
The MRC consented to the Orleans Levee District request to construct an emergency spillway at Pointe-a-la-Hatchie. The spillway was completed the following year. Also in 1925, a rivers and harbors bill directed the Corps and the Federal Power Commission to jointly survey and submit reports on all navigable streams indicating what multi-purpose water resource development possibilities existed for navigation, hydropower, flood control, and irrigation. The 1925 act served as a landmark in the evolution of the Corps' civil works functions.
1926 MRC declared the levee system complete
The MRC concluded in its annual report that the levee system "is now in condition to prevent the destructive effects of floods." At the same time, Congress passed legislation tasking the Corps of Engineers to complete a study to determine the feasibility of controlling Mississippi River floods below Old River by means of spillways and levees. The Corps of Engineers established a spillway board to conduct the survey. Also of significance in 1926, the Secretary of War submitted House Document No. 308 to Congress, presenting an estimated cost of surveys and reports on nearly 200 rivers.
1927 Mississippi Valley deluged
The "Great Mississippi Flood of 1927" so devastated the valley that Herbert Hoover, then Secretary of Commerce, called it "the greatest peace-time calamity in the history of the country." The flood prompted an overhaul of flood control plans for the lower Mississippi River . Both the MRC and the Corps of Engineers submit comprehensive plans that included levees, floodways, and bank protection. While the plans were nearly identical in many respects, lower valley residents found the MRC plan more acceptable, but Major General Edgar Jadwin, the Chief of Engineers suppressed the MRC report, choosing instead to advance the Corps of Engineers plan bearing his name. Also, the Rivers and Harbors Act of 1927 authorized the surveys called for in the 1925 RHA. The resulting “308 Reports” embodied the first systematic efforts at comprehensive basin development planning; and multipurpose planning became a reality.
1928 Flood Control Act
After much debate, Congress approved "An act for the control of floods on the Mississippi River and its tributaries, and for other purposes." Through this historic 1928 act, Congress instructed the MRC to implement the engineering plan advanced by Jadwin for controlling floods on the lower Mississippi River. The plan adopted by Congress provided for enlarging and strengthening the levees from Cape Girardeau to the Gulf of Mexico, with the objective of safely discharging up to 1,500,000 cubic feet/second of water within the main channel. In addition, the levee system was to be supplemented by several floodways. The first floodway was designed to protect the territory between the Arkansas and Red Rivers. Referred to as the Boeuf floodway, this diversion would channel up to 1,500,000 cubic feet/second of flood water away from the Mississippi River near Arkansas City, into the Tensas River Basin. The second floodway, known as the Atchafalaya floodway, would be utilized to carry up to 1,500,000 cubic feet/second of water from the Red and Mississippi Rivers through the Atchafalaya Basin to the Gulf of Mexico. The plan also called for the construction of a spillway above New Orleans. This spillway, the Bonnet Carré, would empty up to 250,000 cubic feet/second of floodwater into Lake Pontchartrain to the north. Lastly, the plan called for the construction of the New Madrid Floodway. This floodway would divert excess waters from the river near Cairo through a levee-flanked floodway. All floodways, with the exception of Bonnet Carré, were to be governed by fuseplug levees.
1928 Mississippi River Commission districts reorganized
The MRC abolished the Northern District and transferred all work to the Corps of Engineers' Rock Island and St. Louis Districts. The MRC also merged the First, Second, Third, and Fourth Districts with the existing Corps districts at Memphis, Vicksburg, and New Orleans.
1929 Reorganization and Relocation
The Corps of Engineers abolished the Western Division and established the Lower Mississippi Valley Division (LMVD), with headquarters in Vicksburg, and the Upper Mississippi Valley Division (UMVD), with headquarters in St. Louis. The Memphis, Vicksburg, and New Orleans districts comprised LMVD, while the St. Louis, Rock Island, and St. Paul districts comprised UMVD. As a part of the reorganization, the MRC president would also serve as the Division Engineer for LMVD, prompting the MRC to relocate its headquarters from St. Louis to Vicksburg. The Corps also established a hydraulics laboratory, designated the Waterways Experiment Station (WES), at Vicksburg. Because WES was established to assist with the implementation of the newly authorized flood control project for the lower Mississippi, the laboratory was placed under the administrative supervision of the MRC president until 1947.
1930 Locks and Dams authorized for the upper Mississippi
The attempts to secure a six-foot channel by means of open river regulation, as authorized in the RHA of 1907, proved to be ill suited to the common low water conditions on the upper Mississippi River. As a result, Congress passed the Rivers and Harbors Act of 1930, authorizing the 9-Foot Channel Project on the upper Mississippi River. The Board of Engineers' final survey report, issued in 1931, called for the construction of 24 new locks and dams and the incorporation of three existing structures into the project. Only 23 new structures were actually constructed, as Lock and Dam No. 23 was later eliminated from the plan.
1931 The Jadwin Plan reconsidered
Opposition to the engineering and economic features of the Jadwin Plan led to a congressionally mandated restudy of the plan. To the dismay of critics, the 1,500-page restudy reaffirmed the Jadwin Plan.
1932 MRC initiated cut-off policy
Studies carried out at the newly created Waterways Experiment Station convinced the MRC to initiate a series of cutoffs in the middle reaches of the Mississippi River. Within nine years, sixteen such cutoffs had shortened the distance from Memphis to Vicksburg by 170 miles and reduced flood heights along the main channel considerably. The successful development of these cutoffs marked a new phase in the evolution of flood-control engineering. Under continued pressure from critics of the Jadwin Plan, Congress passed a resolution to request an examination and review of the status and condition of the works then in progress as authorized by the 1928 act, with a view to determining if changes or modifications should be made in relation to the project and its final execution.
1933 Greathouse et al. v. Dern changed Corps regulatory program
The U.S. Supreme Court ruled that the Corps could refuse a permit for a commercial wharf on the Potomac River that would not harm navigation but would be injurious to the construction of the George Washington Memorial Parkway. The ruling effectively ended the Corps' regulatory program as a navigation-only policy and introduced public interest factors.
1935 MRC proposed sweeping changes to the Jadwin Plan
In a report to the Chief of Engineers, the MRC proposed sweeping changes to the Jadwin Plan. The MRC proposed eliminating the fuseplug-governed Boeuf Floodway in favor of a smaller Eudora Floodway regulated by a controlled spillway; the construction of the controlled Morganza Floodway; tributary improvements and reservoirs in the St. Francis and Yazoo basins; and greater compensation for landowners within the floodways. Many lower valley residents welcomed the proposed modifications, in clear contrast to the reception of the 1931 Corps of Engineers restudy.
1936 Overton Act
The Overton Act modified the 1928 Act in accordance with the MRC recommendations of 1935 to authorize payment for the construction of drainage, levees, highways, highway and railway crossings, and other specified expenditures in connection with the authorized floodways, outlets, and reservoirs. The act also authorized the construction of headwater projects in the Yazoo and St. Francis basins and partial protection of the White River backwater area.
1937 Ohio-Mississippi River flood
The flood of 1937, the result of long-continued heavy rains in the basin of the Ohio River, was of unprecedented magnitude. The New Madrid floodway at Cairo was placed in operation, as was the Bonnet Carré spillway near New Orleans, La. The cutoffs initiated in 1932 along the Mississippi below the mouth of the Arkansas River accelerated discharges and lowered flood heights by as much as five feet.
1938 Flood Control Act
In addition to authorizing $375 million in flood control projects for a variety of river basins, this landmark act reduced requirements for local contribution in the construction of reservoirs and facilitated the construction of headwater projects on many of the major tributaries of the Mississippi River, including the Ohio, the Tennessee, the Cumberland, the Missouri, the Upper Mississippi, the Arkansas, the White, the Red, the St. Francis, and the Yazoo Rivers. These reservoir systems were instrumental in reducing flood levels on the main river, as well as the various tributaries.
1940 World War II promoted economic recovery
The outbreak of hostilities in Europe and the eventual entry of the United States into World War II promoted the recovery of the national economy as well as a substantial increase in river commerce. Unimpeded navigation became essential for military reasons as well, as the nation shipped petroleum and other war-time products. Additionally, almost 4,000 Army and Navy craft and other vessels for use in the war--destroyer escorts, fleet submarines, landing craft, freighters, tankers, and oceangoing tugs--moved from inland shipyards down the Mississippi to the sea.
1941 Plans for the Eudora Floodway abandoned
In the face of heavy opposition, Congress passed the 1941 Flood Control Act, which abandoned the Eudora floodway in favor of higher levees. Additionally, Congress expanded its 1936 authorization to include the Yazoo and Red River backwater areas.
1943 Congress inquired into deepening channel below Cairo
The House Flood Control Committee and the Senate Committee on Commerce passed a resolution calling on the Chief of Engineers and the MRC to submit a report on the feasibility of amending the navigation provisions of the 1928 Act, with specific reference to increasing channel depths from nine to twelve feet from Cairo to Baton Rouge. The next year, the MRC concluded that stabilization efforts already underway, together with additional dredging, might be enough to provide a twelve-foot deep channel below Cairo.
1944 Flood Control Act
Based on the MRC report, the 1944 act authorized approximately 150 additional projects throughout the nation at a cost of $750 million; and it authorized a channel depth of twelve feet in the Mississippi River between Cairo and Baton Rouge, as well as a $200 million stabilization program. The act also required that all subsequent navigation and flood control projects be subjected to the approval of the affected states. Finally, the 1944 act articulated a new policy for the development of recreation facilities at reservoirs, stipulating that "all such public reservoirs shall be open to public use generally without charge for boating, swimming, bathing, fishing, and other recreational purposes, and ready access to and exit from such water areas... shall be maintained for general public use." This new responsibility represented an important step toward true, multi-purpose development of the nation's water resources. The 1944 act signaled the victory of the multipurpose approach. It empowered the Secretary of the Interior to sell power produced at Corps and other federal projects. The act also authorized the gigantic multipurpose civil works project for the Missouri Basin commonly called the Pick-Sloan Plan. In the ensuing years, the Corps built several huge dams on the main stem of the Missouri River. These dams were all multipurpose, providing flood control, irrigation, navigation, water supply, hydropower, and recreation.
1945 Congress authorized Chain of Rocks Canal and Locks 27
The canal was designed to allow river-borne traffic to bypass the treacherous Chain of Rocks Reach, a 17-mile series of rock ledges that increased the velocity of the current and made the stretch extremely dangerous to navigate. Completed in 1953, the new structure and 1,200-foot lock represented the first major addition to the 9-Foot Channel Project. The project served as the final element required to secure a navigable 9-foot channel between St. Paul and New Orleans.
1950 Atchafalaya River study
A major Corps of Engineers study determined that, without interference of some kind, the Atchafalaya would capture the Mississippi River by 1975. To prevent this, the Corps urged Congress to authorize the construction of a controlled connection along the Old River to regulate the volume of water allowed into the Atchafalaya Basin.
1953 Federal Barge Line sold to private investors
River traffic no longer required Federal guarantees.
1954 MRC reported on success
The MRC reported that its flood control efforts had progressed to the point that most of the inhabitants of the Mississippi Valley were now safe from a 1927-caliber flood. Seventy-five percent of the bank revetment had been completed, and only 250 miles of main-line levees remained unfinished. In addition, the UMVD was abolished, and the St. Louis District was transferred to the LMVD.
1960 Corps role in education and environment increased
To discourage further encroachment on the flood plains, Congress authorized the Corps of Engineers to compile and disseminate information on floods and flood damage, to identify areas subject to overflow, and to present general criteria for guidance in the use of flood plain areas. That same year, a memorandum from the Corps' Chief Counsel advised that the Fish and Wildlife Coordination Act could be used as a basis for denying a permit under the 1899 act solely for fish and wildlife reasons. As a result of the memorandum, the Corps' regulatory program began to take on an environmental focus.
1963 Completion of Old River control structures/regulatory program continued environmental shift
The Corps began operation of the Old River control structures recommended in 1950 and authorized by Congress in 1954. The Corps also added consideration of fish and wildlife impacts in making permit decisions to the regulations; however, the decision on whether to issue a permit “must rest primarily upon the effect of the proposed work on navigation.”
Dry conditions caused the Mississippi River to remain below 0.0 on the St. Louis gage for 100 consecutive days from November 1963 to March 1964. The St. Louis area had experienced an accumulated precipitation deficit of 42 inches from 1952 to 1964. On 26 January 1963 the river dropped to –5.8 feet, and on 1 January 1964 it reached –5.6 feet, both perilously close to the all-time low stage set in 1940.
1965 River and Harbor and Flood Control Act
This act authorized 150 Corps projects or project modifications at an estimated cost of $2 billion. A long-range master plan for stabilizing the Mississippi River between Cairo and Baton Rouge was adopted to facilitate the establishment of a twelve-foot channel depth.
1966 Bridge authorities transferred to the Coast Guard
The program transfer occurred in 1967 after being vested in the Corps for 79 years.
1969 National Environmental Policy Act
This act established a new philosophy to guide federal thinking and activities relative to the nation's natural environment. Most importantly, it established preparation of the environmental impact statement as an integral element of the Corps' pre-authorization process on all projects and permit-granting activities. In addition to the environmental impact of a proposed project, the Corps was to take into consideration the sociological, cultural, biological, demographic, and economic effects. In the process of producing such a statement, the Corps was to consult with local, state, and federal agencies, as well as concerned citizens' groups and individuals to assure the broadest possible input into the impact statement.
1970 Refuse Act Permit Program initiated
Executive Order 11574 initiated the Section 13 (Rivers and Harbors Act of 1899) permit program, known as the Refuse Act Permit Program (RAPP) for controlling all discharges into navigable waters and their tributaries. The Corps administered RAPP, with oversight and decision authority vested in the EPA.
1972 Congress passed the Federal Water Pollution Control Act
Together with the Environmental Quality Improvement Act of 1970, FWPCA created guidelines affecting standards applied by the Corps in its environmental impact statements, as well as reinforcing Corps' perceptions of changing national priorities. The FWPCA amendments enacted Section 404, while Section 402 replaced RAPP.
1972 12-foot channel for upper Mississippi declared not feasible
The Corps of Engineers completed the Upper Mississippi River Comprehensive Basin Study. The Corps looked at the feasibility of establishing a 12-foot navigation channel on the Upper Mississippi, but declared it was not economically feasible.
1973 Mississippi River flood
The flood of 1973 caused damages estimated at $183,756,000. While larger metropolitan areas were protected, many river towns north and south of St. Louis were hit hard by six different flood crests between March 9 and May 25. Although the river remained above flood stage for a then record-setting 77 consecutive days, Corps flood control measures, for the first time, prevented more damage than actually occurred.
1974 Lawsuits delay replacement of Locks & Dam No. 26
One day before the St. Louis District was to open bids on the first construction contracts for the Lock & Dam No. 26 Replacement, two separate lawsuits were filed by environmental groups and the railroad industry seeking injunctions to stop the construction of the replacement structure. Both suits were filed on the grounds that the authorization for the replacement project and the Corps' EIS were in violation of the Rivers and Harbors Act of 1909 and NEPA. U.S. District Judge Charles Richey issued a preliminary injunction halting construction of the project pending preparation of a new EIS and proper congressional authorization.
1977 The Corps issued the first Nationwide permits
FWPCA was renamed the Clean Water Act. Executive Order 11990 was issued. The order applied to minimizing the destruction, loss, or degradation of wetlands.
1978 Inland Waterways Act passed
Congress passed the Inland Waterways Act (PL 95-502), which authorized the Corps to construct a new dam and one new 1,200-foot lock as part of the Lock & Dam 26 Replacement Project. The Act stipulated that upon completion of a master plan for the entire Upper Mississippi River Basin, authorization for a second lock would be considered. The Act also established the Inland Waterways Trust Fund, financed by a user fee on barge traffic.
1979 Section 404 authorities defined
The United States Attorney General ruled that the EPA, not the Corps, had the authority to determine the limits and exemptions of the Section 404 Program.
1986 Water Resources Development Act passed
Congress passed PL 99-662, the Water Resources Development Act of 1986 (WRDA 86). This law signified a major and enduring shift in the nation's attitude towards water resources planning. The legislation reflected general agreement that non-Federal interests can, and should, shoulder more of the financial and management burdens, that environmental considerations were intrinsic to water resources planning, and that marginal projects must be weeded out. The act authorized construction of a 600-foot auxiliary lock at the Melvin Price Locks and Dam and specified that half of the funding for the construction of the auxiliary lock be paid out of the Inland Waterways Trust Fund. WRDA 86 also established the Upper Mississippi River System Environmental Management Plan (EMP).
During 1988-1989, 44 daily low stage records were established at St. Louis, while 128 daily low stage records were established at Cape Girardeau. Despite the low water conditions, the navigation channel remained open to traffic owing largely to augmented flows from the Missouri River reservoirs, the slackwater system on the Upper Mississippi, channel improvement structures, and dredging.
1990 Section 404 and mitigation requirements
The Corps signed an MOA with the EPA on the mitigation requirements for Section 404 permits. The MOA established the sequence of avoiding and minimizing impacts prior to mitigating for wetland losses caused by permit issuance.
1990 Coastal Wetlands Planning, Protection and Restoration Act passed
Coastal Louisiana has lost over 900,000 acres of wetlands since the 1930s. As late as the 1970s, the loss rate for Louisiana's coastal wetlands was as high as 25,600 acres per year. The cumulative effect of human activities in the coastal area has been to drastically tilt the natural balance from the net land-building deltaic processes to land loss due to altered hydrology, subsidence, and erosion. Approximately 30 percent of the land losses being experienced in coastal Louisiana are due to natural causes. The remaining 70 percent are attributable to man's effect on the environment, both direct and indirect. In 1990, passage of the Coastal Wetlands Planning, Protection and Restoration Act (PL 101-646, Title III, CWPPRA), locally referred to as the Breaux Act, provided authorization and funding for a multi-agency task force to begin actions to curtail wetland losses.
1992 Upper Mississippi River Navigation Study initiated
The Corps of Engineers initiated a study to address potential economic losses to the nation from significant traffic delays at the aging locks and dams on the upper Mississippi and Illinois river systems between 2000 and 2050.
1993 Massive flood on the upper Mississippi River
The Flood of 1993 was a hydro-meteorological event without precedent in modern times on the upper Mississippi River. In terms of precipitation amounts, record river levels, flood duration, area of flooding, and economic losses, it surpassed all previous floods in the United States. On August 1 the Mississippi River set a high water mark on the St. Louis gage at 49.58 feet and reached an all-time high in terms of flow at 1,070,000 cfs. The river remained above flood stage for a new-record 80 consecutive days and for a new-record 148 days during the calendar year. During the flood, the Federal flood control reservoir system stored over 17 million acre-feet of floodwater. None of this water reached St. Louis until after the August crest. These reservoirs are credited with reducing flood levels at St. Louis by about three feet. Despite the length, duration and height of the flood, all the levees/floodwalls built to urban design standards withstood the onslaught.
1994 Environmental Pool Management
The St. Louis District implemented the first ever Environmental Pool Management Plan. The concept of the plan involved pool drawdowns to correct adverse environmental impacts caused by artificially high water levels in the navigation pools, which diminished aquatic grasses and habitat. The pool drawdown successfully increased vegetative growth critical to alleviating the adverse impacts.
1997 Mississippi Valley Division established
The Lower Mississippi Valley Division was abolished with the establishment of the Mississippi Valley Division. The St. Paul, Rock Island, St. Louis, Memphis, Vicksburg, and New Orleans districts comprised the new division.
1998 Louisiana Coastal plan proposed
After extensive studies and construction of a number of coastal restoration projects accomplished under CWPPRA, the State of Louisiana and the Federal agencies charged with restoring and protecting the remainder of Louisiana's valuable coastal wetlands adopted a new coastal restoration plan. The plan proposed ecosystem restoration strategies that would result in efforts larger in scale than any that had been implemented in the past.
2001 Navigation Study Restructured
Following criticism of the study, the Corps of Engineers restructured the Upper Mississippi River-Illinois Waterway System Navigation Study to address the ongoing cumulative effects of navigation and ecosystem restoration needs, with the goal of attaining an environmentally sustainable navigation system.
2001 SWANCC Decision restricts Section 404
The U.S. Supreme Court (SWANCC Decision) restricted the Corps regulatory jurisdiction under Section 404 to traditionally navigable waters, surface tributaries to such waters, and waters and wetlands adjacent to Section 10 waters and their tributaries. |
Significant figures are used to keep track of the quality (variability) of measurements. This includes propagating that information during calculations using the measurements. The purpose of this page is to help you organize the information about significant figures -- to help you set priorities. Sometimes students are overwhelmed by too many rules, and lack guidance about how to sort through them. What is the purpose? Which rules are most important?
I consider the following points to be the most important:
- Significant Digits relate to measurements. When you think about how many Significant Digits a number has, think about where the number came from.
- The most important rule for handling Significant Digits when doing calculations is the rule for multiplication.
I will de-emphasize the following:
- Whether zeroes are significant.
- The rules for handling Significant Digits in other types of calculations.
What if the advice given here disagrees with what your book or instructor says?
Let's break that into two parts. One is about the information per se, and the other is about priorities, about the approach to thinking about Significant Digits. The information here should agree, for the most part. However, what may be different is the order of presenting things, with a different perspective in the approach -- the steps -- to learning Significant Digits. We will all end up in the same place.
If you were completely happy with how the Significant Digits topic is presented in your own course, you probably wouldn't be reading this page. Think of it as another approach -- to the same thing. Sometimes, looking at things differently can help. Trying two approaches can be better than trying only one. There is no claim that one approach is "right" or even "better". If there is a discrepancy between any information here and your own course, please let me know -- or check with your own instructor. Some details are a matter of preference. In the lab. When you take a measurement, you record not only the value of the measurement, but also some information about its quality. Using Significant Digits is one simple way to record the quality of the information.
A simple and useful statement is that the significant figures (Significant Digits) are the digits that are certain in the measurement plus one uncertain digit.
Significant Digits is not a set of arbitrary rules. Almost everything about Significant Digits follows from how you make the measurements, and then from understanding how numbers work when you do calculations. Unfortunately, there are "special cases" that can come up with Significant Digits. If all the rules are presented together, it is easy to get lost in the rules. Better -- and what we will do here -- is to emphasize the logic of using Significant Digits. This involves a few basic ideas, which can be stated as rules. We will leave special cases for a while, so they do not confuse the big picture. The number of high priority rules about Significant Digits is small.
The best way to start with Significant Digits is in the lab, taking measurements. An alternative is to use an activity that simulates taking measurements -- of various accuracy. We will do that here, using drawings of measurement scales. A bad way to start with Significant Digits is to learn a list of rules.
How many significant figures does a measurement have?
When you take a measurement, you write down the correct number of digits. You write down the significant digits. That is, the way you write a number conveys some information about how accurate it is. It is up to you to determine how many digits are worth writing down. It is important that you do so, since what you write conveys not only the measurement but something about its quality. For many common lab instruments, the proper procedure is to estimate one digit beyond those shown directly by the measurement scale. If that one estimated digit seems meaningful, then it is indeed a significant digit.
Example 1: Reading a typical scale
The scale shown here is a "typical" measurement scale. The specific scale is from a 10 mL graduated cylinder -- shown horizontally here for convenience. The arrow marks the position of a measurement.
Glossary entry: Scale.
Our goal is to read the scale at the position of the arrow. Let's go through this in detail.
- The numbered lines are 1 mL apart.
- The little lines (between the numbered lines) are 0.1 mL apart.
- The arrow is clearly between 4.7 and 4.8 mL.
- We will estimate the position of the arrow to 1/10 the distance between the little lines, that is, to the nearest 0.01 mL. (It is a common rule of thumb to estimate the last digit to 1/10 the distance between the lines. This corresponds, of course, to writing one more digit "as best we can".)
- A reasonable estimate is 4.78 mL. Some people might say 4.77 mL or 4.79 mL. No one should say 4.72 mL! That is, the estimate is 4.78 mL to about +/- 0.01 mL. 4.78 mL is 3 Significant Digits; the last Significant Digit is not certain, but is "close".
How meaningful is a drawing of a measurement scale, such as the one in the example above? It illustrates one particular issue very well: how to read a scale per se, figure out what the marks and labels mean, and how to estimate the final digit. Real measuring instruments, such as graduated cylinders, have those issues. Depending on the situation, there may be other issues that affect the ease of reading. In the drawing above, the goal is to read a well-defined arrow. With a real graduated cylinder, you may need to deal with a meniscus (curved surface) and parallax. Those issues are beyond our topic here.
A final zero? In estimating that last digit, be sure to write down the zero if your best estimate is indeed zero. For example, if the last digit reflects hundredths of a mL, you might estimate in one case that there are 6 hundredths; thus you would write 6 as the last digit (e.g., 8.16 mL -- 3 Significant Digits). But you might (in another case) estimate that there are 0 hundredths; it is important that you write that zero (e.g., 8.10 mL -- 3 Significant Digits). That final zero says you looked for hundredths and found none. If you wrote only 8.1 mL (2 Significant Digits), it would imply that you did not look for hundredths.
Example 2: When the measurement seems to be "right on" a line.
The arrow below appears to be "right on" the "4.7" line. (Let's assume that. The point here is to deal with the case where you think the arrow is "on" the line.) Thus we estimate that the hundredths place is 0. The proper reading, then, is 4.70 mL (3 Significant Digits). That final zero means that we looked for hundredths, and found none. If we wrote 4.7 mL (2 Significant Digits), it would imply that we didn't look for hundredths.
The scale shown in Example 2 is the same scale as in Example 1. In Example 1 our proper reading had 3 Significant Digits. That is also true in Example 2. That final 0 in Example 2 is an estimate; it is entirely equivalent to the final 8 estimated in Example 1.
When I look at a measurement that someone else has given me, how can I tell how many significant figures it has?
There are a couple of ways to approach this:
- You can look at the number and analyze the digits, using your rules for Significant Digits.
- You can think about the measurement scale that resulted in this measurement. Think about how the scale was read, with one digit being estimated.
Both approaches will work. They reflect the same principles. Often, simply looking at the number will be sufficient. However, when you are not sure, it helps to go back to basics: think about the underlying measurement. We will illustrate this in the next section, on zeroes -- the situation most likely to cause confusion.
What about the zeroes? Are they significant or not?
We tend to spend more time on this issue than it really is worth. Only one tenth of all digits are zeroes, yet the bulk of a list of Significant Digits rules may be about how to treat the zeroes. Many zeroes are clear enough, but indeed it can take a bit of thought to decide whether some zeroes are or are not significant.
If you understand where Significant Digits come from, then whether a zero is significant should be clear -- at least most of the time. If you are learning Significant Digits by memorizing rules, then you are doing it the hard way -- not understanding the meaning. If, for whatever reason, you are struggling with Significant Digits, the problem of the zeroes is a low priority problem.
Here is what I usually suggest to students. Don't worry too much about the rules for zeroes, especially when you are just starting. As you go on, ask about specific cases where you are not sure about the zeroes. That way, you will gradually learn how to deal with the zeroes, but not get bogged down with what can seem to be a bunch of picky rules.
The key point in deciding whether a zero is significant is to decide if it is part of the measurement, or simply a digit that is there to "fill space". The next section will help with much of the "zeroes problem".
Why is scientific notation helpful?
When a number is written in standard scientific (exponential) notation format, there should be no problem with zeroes. In this format, with one digit before the decimal point and only Significant Digits after the decimal point, all digits shown are significant.
Example 3: Scientific Notation
How many Significant Digits are in the measurement 0.00023456 m?
In scientific notation that is 2.3456x10^-4 m. 5 Significant Digits. Scientific notation makes clear that all the zeroes to the left are not significant. The first zero is just decorative and could be omitted; the others are place-holders, so you can show that the 2 is the fourth decimal place.
The "rule" that covers this case may be stated: zeroes on the left end of a number are not significant -- regardless of where the decimal point is. Hopefully, the example, showing how this plays out in scientific notation, makes this rule clearer.
Example 4: Scientific Notation
How many Significant Digits are in the measurement 0.00023450 m?
In scientific notation that is 2.3450x10^-4 m. 5 Significant Digits. That final zero is part of the measurement. If it weren't, why would it be there?
The "rule" that covers this case may be stated: zeroes on the right end of a number are significant -- if they are to the right of the decimal point. This rule may seem confusing in words, but showing the case in scientific notation should make it clearer.
Example 5: Scientific Notation
How many Significant Digits are in the measurement 234000 m?
In scientific notation that is ... Hm, what is it? It's not really clear. Let's suggest that it is 2.34x10^5 m. That is clearly 3 Significant Digits.
Why did I choose to not consider the zeroes significant? Maybe they are significant. Or maybe one of them is significant. The problem is that there is no way to tell from the number 234000 whether those zeroes are significant or are merely place holders, telling us (for example) that the 4 is in the thousands place. So why choose to make them not significant? First, that is the conservative position. I don't know whether they are significant, and to claim that they are is an unwarranted claim of quality. Second, 3 Significant Digits is reasonable -- a common way to measure distances; 6 Significant Digits is not likely. What if the person making the measurement knows that the measurement is good to 4 Significant Digits, with the first zero being significant? Then, somehow, they need to say so. One good way is to put the measurement in proper scientific notation in the first place: 2.340x10^5 m, 4 Significant Digits.
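For readers who like to see the bookkeeping made explicit, here is a small Python sketch (the function count_sig_figs and its string-based approach are purely illustrative, not a library routine). It normalizes a measurement written as a string and counts the digits that remain, treating leading zeroes as place-holders and, conservatively, treating trailing zeroes in a whole number such as 234000 as not significant, in line with Example 5.

```python
def count_sig_figs(measurement: str) -> int:
    """Count significant figures in a measurement written as a string.

    Conservative convention: trailing zeroes in a whole number without a
    decimal point (e.g. "234000") are treated as place-holders only.
    """
    s = measurement.strip().lstrip("+-")
    # Drop any exponent part ("2.340e5" -> "2.340"); the power of ten is exact.
    mantissa = s.lower().split("e")[0]
    if "." in mantissa:
        digits = mantissa.replace(".", "")
        # Leading zeroes never count; trailing zeroes after the decimal point do.
        return len(digits.lstrip("0"))
    # No decimal point: strip leading zeroes and, conservatively, trailing ones.
    return len(mantissa.strip("0"))

print(count_sig_figs("0.00023456"))   # 5  (Example 3)
print(count_sig_figs("0.00023450"))   # 5  (Example 4)
print(count_sig_figs("234000"))       # 3  (Example 5, conservative reading)
print(count_sig_figs("2.340e5"))      # 4  (written in scientific notation)
```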
How do I handle significant figures in calculations?
It depends on the type of calculation. Each math operation has its own rules for handling Significant Digits. More precisely, there is one rule each for:
- multiplication and division (which are basically the same thing, so they share a rule);
- addition and subtraction (which are basically the same thing, so they share a rule);
- logs and antilogs (which are basically the same thing, so they share a rule).
Those three rules are distinct; you must be careful to use the right rule for the right operation. But there is good news: The multiplication rule is by far the most important in basic chemistry -- and it is perhaps also the simplest. So, as a matter of priority, emphasize the multiplication rule. When you have mastered it, you can go on and learn the addition rule. It is useful, though much less important. Whether you need the rule for logs will depend on your course; some courses manage to avoid this rule completely.
In summary ... there are three rules, but there is a clear set of priorities with them. Emphasize the multiplication rule. It is the most important rule, and the easiest one.
Significant figures in multiplication
If you multiply two numbers with the same number of Significant Digits, then the answer should have that same number of Significant Digits. If you multiply together two numbers that each have 4 Significant Digits, then the answer should have 4 Significant Digits.
Example 6
Multiply 12.3 cm by 2.34 cm.
Doing the arithmetic on the calculator gives 28.782. In this case, each number has 3 Significant Digits. Thus we report the result to 3 Significant Digits. Proper rounding of 28.782 to 3 Significant Digits gives 28.8. With the units, the final answer is 28.8 cm^2.
If you multiply together two numbers with different numbers of Significant Digits, then the answer should have the same number of Significant Digits as the "weaker" number. Hm, that is a lot of words. An example should help. Multiply a number with 3 Significant Digits and a number with 4 Significant Digits. Keep 3 Significant Digits in the answer.
Example 7
Multiply 24 cm by 268 cm.
Doing the arithmetic on the calculator gives 6432. One measurement has 2 Significant Digits and one has 3 Significant Digits. The 2 Significant Digits number is "weaker": it has less information; it has only two digits of information in it. That is, the 2 Significant Digits number limits the calculation. Thus we report the result to 2 Significant Digits. Proper rounding of 6432 to 2 Significant Digits gives 6400. That is clearer in scientific notation, as 6.4x10^3. With the units, the final answer is 6.4x10^3 cm^2. [Recall section Why is scientific notation helpful?, especially Example 5.]
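If it helps to see this rule as code, here is a minimal Python sketch (the helper round_to_sig_figs is my own illustration, not a standard function). It rounds a calculator result to a chosen number of significant digits; deciding how many digits the "weaker" measurement allows is still your job.

```python
import math

def round_to_sig_figs(value, sig_figs):
    """Round value to the given number of significant figures."""
    value = float(value)
    if value == 0.0:
        return 0.0
    # Position of the leading digit, e.g. 0 for 6.4, 3 for 6432.
    exponent = math.floor(math.log10(abs(value)))
    # Number of decimal places that keeps exactly sig_figs digits.
    decimals = sig_figs - 1 - exponent
    return round(value, decimals)

# Example 6: both factors have 3 significant figures, so keep 3.
print(round_to_sig_figs(12.3 * 2.34, 3))   # 28.8

# Example 7: 24 (2 sig figs) is the weaker factor, so keep 2.
print(round_to_sig_figs(24 * 268, 2))      # 6400.0, i.e. 6.4x10^3
```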
The following two examples serve as reminders that it is important to understand the context of the particular problem. In Example 7, we reported the product of 24 & 268 to 2 Significant Digits. But in Example 8, which follows, we report the product of those same two numbers to 3 Significant Digits. Both are correct -- because the contexts are different. Example 9 reminds us of another issue in carefully recording measurements.
Example 8
You have an object that is 268 cm long. What would be the total length of 24 such objects?
The calculator gives 6432, as in Example 7. Now we look at the Significant Digits; we must carefully think about what each number means. "268 cm" is an ordinary measurement; it has 3 Significant Digits. But the "24" is a count, and is taken as exact (with no uncertainty). That is, the "24" does not limit the calculation, and we report 3 Significant Digits. With the units, the final answer is 6.43x10^3 cm.
Example 9
You measure the sides of a rectangle. The sides are 28.2 cm and 25 cm. What is the area? But before you calculate the area... There is probably something wrong with the statement of this question. What?
What's wrong? Well, we have an object, approximately square. Someone has measured two sides. One would think they used the same measuring instrument -- the same ruler. But the two reported measurements are inconsistent. One is reported to the nearest cm, and one is reported to the nearest tenth. That is suspicious. Why were they not reported the same way?
The purpose of this example is to remind you of the importance of reading the measuring instrument carefully and consistently, and recording the final zero if indeed that is your estimate. There is no need to carry out the calculation in this case.
- The position of the decimal point is irrelevant in determining Significant Digits for multiplication. Just count how many Significant Digits there are.
- We discussed the multiplication rule for the case of multiplying two numbers. If there are more than two numbers, the rule is the same. You can think of this as multiplying two numbers at a time.
- Multiplication and division are basically the same operation. Dividing by "x" is equivalent to multiplying by "1/x". The rule for Significant Digits is the same for multiplication and division, and for operations involving any combination of them.
- Ordinary calculators have no idea about Significant Digits at all. They may give you too many digits or too few digits. Use the calculator to do the arithmetic, but then you take responsibility for the Significant Digits.
Significant figures in addition
For students who are just starting chemistry, the addition rule for Significant Digits is not as important as the multiplication rule. The intent of that statement is to help you set priorities. Learn one thing at a time -- especially if you are finding the topic difficult. The multiplication rule is more important; learn it first and get comfortable with it.
Most instructors will want you to learn the addition rule. I am not suggesting otherwise. Again, the emphasis here is to guide you to learn one thing at a time.
Here is an example of a basic chem situation that would seem to involve the addition rule, yet where using that rule is not really needed. Consider calculating the molar mass (formula weight) of a compound, say KOH. Using the atomic masses shown on the periodic table, the molar mass of KOH is 39.10 + 16.00 + 1.008 = 56.108 (in g/mol).
So, how many Significant Digits do we keep?
One answer might be to use the Significant Digits rule for addition and note that the result is only good to the hundredths place. Therefore, we round it to 56.11 g/mol.
However, that may be unnecessary -- and even undesirable. The reason for calculating a molar mass is to use it in a real calculation. In real cases, it is usually fine to calculate molar mass by using the atomic masses shown on your periodic table. No rounding, at least now. When you use the molar mass for a calculation, you round the final result. At this step, you should -- in principle -- consider the quality of the molar mass number. However, in practice, it is likely to not matter. It is most likely -- especially in beginning chemistry -- that the Significant Digits of the final result will be limited by other parts of the calculation, not the molar mass.
Therefore, I encourage beginning students to use the procedure above... Use all the digits of the atomic weights shown on their periodic table. Just add them up, and use the molar mass you get. Don't round the molar mass. Round the final result for the overall calculation, assuming that the molar mass Significant Digits is not a concern. This is usually fine, and lets you worry about the addition rule a bit later.
Now, it is easy enough for the textbook to make up problems where the above method would not be satisfactory. My point is that such cases are uncommon in real problems, especially in introductory chemistry. In fact, a simple example of a question is "Calculate the molar mass of ... [some chemical]." How many Significant Digits do you report? Well, you'll need to use the addition rule for Significant Digits. But that is an artificial question; in the real world one almost always wants to know a molar mass in the context of a specific calculation involving some measurement, and it is quite likely that the measurement will limit the quality of the result.
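Here is that advice as a tiny Python sketch. The atomic masses are the ones quoted above; the 2.35 g sample mass is an invented example, used only to show a measurement limiting the final answer to 3 significant figures.

```python
# Molar mass of KOH from periodic-table values, left unrounded for now.
molar_mass_koh = 39.10 + 16.00 + 1.008     # 56.108 g/mol

mass_g = 2.35                              # a hypothetical weighing, 3 sig figs
moles = mass_g / molar_mass_koh            # calculator value: 0.04188...

# Round only the final result; the 3-sig-fig mass limits the answer.
print(round(moles, 4))                     # 0.0419 (mol), 3 significant figures
```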
Significant figures in logs
The logarithm of 74 is 1.87. (We will use base 10 logs here, but the Significant Digits rule is the same in any case.) 74 has 2 Significant Digits, and the log shown, 1.87, has 2 Significant Digits. Why? Because the 1 in the log (the part before the decimal point -- the "characteristic") relates to the exponent, and is an "exact" number.
Whoa! What exponent? Well, it will help to put the number in standard scientific notation. 74 is 7.4x10^1. Now consider the log of each part: the log of 10^1 is 1, an exact number; the log of 7.4 is 0.87 -- with a proper 2 Significant Digits. Add those together, and you get log 74 = 1.87 -- with 2 Significant Digits.
Log of 740,000? That is the log of 7.4x10^5: 5.87. In scientific notation only the exponent is different from the previous number; therefore in the logarithm, only the leading integer is different.
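As a quick numerical check (a minimal sketch using the standard math.log10; the two-decimal formatting mirrors the 2-significant-digit mantissa discussed above):

```python
import math

print(f"{math.log10(74):.2f}")      # 1.87 -- characteristic 1 (exact) + 2-digit mantissa
print(f"{math.log10(7.4e5):.2f}")   # 5.87 -- same mantissa, different characteristic
```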
This log rule is often skipped in an intro chem course for a couple of reasons. First, logs may come up only once, with pH. Second, students in an intro chem course often are weak with using exponents -- and may not have learned about logs at all. So, sometimes one just suggests that pH be reported to two decimal places -- a usable if rough approximation.
Should I round off to the proper number of significant figures at each step?
The short answer is "no".
It is common now that most calculations are done on a calculator. Just do all the steps with the calculator, letting the machine keep track of the intermediate results. There is no need to even write down intermediates, much less round them. Why avoid rounding at each step? Each time you round, you are throwing away some information. If you do it over and over, it gets worse and worse; you accumulate rounding errors -- and that is not so good.
Example 10
Imagine that we want to calculate 1.00 * (1.127)^10. For our purposes here, the numbers are measurements, and we are to give the answer with proper Significant Digits. Proper Significant Digits in this case is 3 (because 1.00 is 3 Significant Digits). (For a clarification, see * note at end of this example box.)
We might consider two ways to do this:
- Do the indicated calculation; then, at the end, round to 3 Significant Digits. This gives 3.31.
- First round the 1.127 to 3 Significant Digits: 1.13. Now do the calculation and round the answer to 3 Significant Digits. This gives 3.39.
Well, those two calculations give answers that are quite different! How can we judge them? Here is one approach... The original number 1.127, by convention, means 1.127 +/- 0.001. That is, this measurement might be 1.126 to 1.128. If we do the calculation with 1.126, we get 3.28. If we do the calculation with 1.128, we get 3.33. Thus it seems that the result should be in the range of those two numbers, 3.28-3.33. In fact, method 1 (calculate with the original number and round only at the end) gives 3.31 -- which is near the middle of that range. However, method 2 (round first) gives 3.39 -- which is outside the range, by quite a bit. The reason should be clear enough in this example: we rounded the base up once, and that rounded-up value was multiplied through ten times, biasing the result upwards. This is an example of how rounding errors can accumulate. It is better to round only at the end.
At the start of this example we said that the proper number of Significant Digits in this case was 3. As we went on, we found that the range of possible answers was 3.28-3.33, or roughly 3.31 +/- 0.03. Obviously, this means that stating the answer as 3.31, to 3 Significant Digits with an implication of +/- 0.01, is not so good. This illustrates a limitation of Significant Digits; it is not so good when there are many error terms to keep track of (10, in this case). The main point of this example was to show the effect of compounding rounding errors -- hence the desirability of not rounding off at intermediate stages. (For more about such limitations of Significant Digits, see the section below: Limitations and complications of Significant Digits.)
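Example 10 is easy to reproduce; this sketch simply redoes the arithmetic both ways and confirms the numbers quoted above.

```python
# Method 1: round only at the end.
print(round(1.00 * 1.127 ** 10, 2))     # 3.31

# Method 2: round the intermediate value first (not recommended).
print(round(1.00 * 1.13 ** 10, 2))      # 3.39

# The range implied by 1.127 +/- 0.001 brackets method 1 but not method 2.
print(round(1.126 ** 10, 2), round(1.128 ** 10, 2))   # 3.28 3.33
```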
The discussion of Significant Digits when adding up atomic weights to calculate a molecular weight, in the section Significant figures in addition, is consistent with this point. The question of how to round when the final digit is a 5 -- or at least appears to be a 5 -- is discussed below in the Special cases section on Rounding: What to do with a final 5.
How many Significant Digits do conversion factors have? Well, it depends. Conversion factors within the metric system, i.e., involving only metric prefixes, are exact. Similarly, conversion factors between large and small units within the American system (e.g., 12 inches per foot) are exact. Conversion factors between metric and American systems are typically not exact, and it is your responsibility to try to make sure you use a conversion factor that has enough Significant Digits for your case. It is generally not good to allow a conversion factor to limit the quality of a calculation.
The conversion factor between centimeters and inches, 2.54 cm = 1 inch, is exact -- because it has been defined to be exact. If you convert 14.626 cm to inches, at 2.54 cm/inch, you can properly report the result as 5.7583 inches -- 5 Significant Digits, like the original measurement -- because the conversion factor is exact.
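The same arithmetic in a two-line sketch (2.54 cm per inch is the exact defined factor; the measurement carries 5 significant figures):

```python
cm_per_inch = 2.54                          # exact by definition
length_cm = 14.626                          # measured value, 5 significant figures
print(round(length_cm / cm_per_inch, 4))    # 5.7583 inches
```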
Many conversion factors we use in chemistry relate one property to another. Examples are density (mass per volume, g/mL) and molar mass (mass per mole, g/mol). These conversion factors are based on measurements, and their Significant Digits must be considered. It is your responsibility to think about the Significant Digits of a conversion factor. The best approach is usually to think about where the number came from. Is it a definition? a measurement?
Limitations and complications of Significant Digits
Using Significant Digits can be a good simple way to introduce students to the idea of measurement errors. It allows us to begin to relate the measurement scale to measurement quality, and does not require much math to implement. However, Significant Digits are only an approximation to true error analysis, and it is important to avoid getting bogged down in trying to make Significant Digits work well when they really don't.
One type of difficulty with Significant Digits can be seen with reading a scale to the nearest "tenth". (The scale shown with Example 1 illustrates this case.) In this case, 1.1 and 9.1 are both proper measurements. If we assume for simplicity that each measurement is good to +/- 0.1, the uncertainty in the first measurement is about 10% and the uncertainty in the second measurement is about 1%. Clearly, simply saying that both numbers are good to two Significant Digits is only a rough indication of the quality of the measurement.
Further, Significant Digits does not convey the magnitude of the reading uncertainty for any specific scale. The common statement, which I used in the previous paragraph, is that readings are assumed to be good to 1 in the last place shown. But on some scales, it would be much more realistic to suggest that the uncertainty is 2 or even 5 in the last place shown. A similar problem can occur when the errors from many numbers are accumulated in one calculation. Example 10 illustrated this.
Another limitation of Significant Digits is that it deals with only one source of error, that inherent in reading the scale. Real experimental errors have many contributions, including operator error and sometimes even hidden systematic errors. One cannot do better than what the scale reading allows, but the total uncertainty may well be more than what the Significant Digits of the measurements would suggest.
I have found that, even in introductory courses, some of the students will realize some of these limitations. When they point them out to me, I am happy to compliment them on their understanding. I then explain that Significant Digits is a simple and approximate way to start looking at measurement errors, and assure them that more sophisticated -- but more labor-intensive -- ways are available.
Scale Reading: Digital instruments
Some modern measuring instruments have a digital scale. Electronic balances are particularly common. How do you know how many Significant Digits to write down from a digital scale? Good question. Most such instruments will display the proper number of digits. However, you should watch the instrument and see if that seems reasonable. Remember that we usually estimate one digit beyond what is certain. With a digital scale, this is reflected in some fluctuation of the last digit. So if you see the last digit fluctuating by 1 or 2, that is fine. Write down that last digit; you should try to write down a value that is about in the middle of the range the scale shows.
If the fluctuation is more than 2 or so in the last digit, it may mean that the instrument is not working properly. For example, if the balance display is fluctuating much, it may mean that the balance is being influenced by air currents -- or by someone bumping the bench. Regardless of the reason, a large fluctuation may mean that a displayed digit is not really significant.
Scale Reading: Volumetric pipets or volumetric flasks
These measuring instruments have only one calibration line. You adjust the liquid level to the calibration line -- as close as you can; you then have the volume that is shown on the device. A 10 mL volumetric pipet measures 10 mL; that is the only thing it can do. So, how many Significant Digits do we report in such a measurement? Obviously the usual procedures for determining Significant Digits are not applicable.
One key determinant of the quality of a measurement with a volumetric pipet is the tolerance -- the accuracy of the device as guaranteed by the manufacturer. The tolerance may be shown on the instrument; if not, it can be obtained from the catalog or other reference source.
There is no necessary relationship between the tolerance and measurement error. However, it turns out that these instruments have been designed so that the tolerance is close to the typical measurement error. Thus, as an approximation, but a useful one, one can treat the stated tolerance as the measurement error. As a rule of thumb, high quality ("Class A") volumetric glassware will give 4 Significant Digits measurements. (In contrast, ordinary glassware will give about 3 Significant Digits at best.) Of course, this assumes that the instrument is being used by trained personnel. In serious work, one would take care to measure actual experimental errors.
Rounding: What to do with a final 5
There are two points to be made here. The first is to make sure that the final 5 really is a final 5. And then, if it is, what to do.
Is the final 5 really a final 5? This might seem to be simple enough, but with common calculators it is easy to be misled. Calculators know nothing about Significant Digits; how many digits they display depends on various things, including how you set them. It is easy for a calculator to mislead you about a final 5. For example, imagine that the true result of a calculation is 8.347, but that the calculator is set to display two decimal places (two digits beyond the decimal point). It will show 8.35. If you want 2 Significant Digits, you would be tempted to round to 8.4. However, that is clearly incorrect, if you look at the complete result 8.347, which should round to 8.3 for 2 Significant Digits. How do you avoid this problem? If you see a final 5 that you want to round off, increase the number of digits displayed before making your decision.
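The same pitfall shows up in any program that rounds a result that has already been formatted for display. A minimal Python sketch, using the 8.347 value from the paragraph above:

```python
true_value = 8.347

# A calculator set to show two decimal places would display:
displayed = f"{true_value:.2f}"
print("Displayed value:", displayed)               # 8.35 -- looks like a final 5

# Rounding the displayed 8.35 to 2 Significant Digits tempts you toward 8.4,
# but rounding the complete value gives the correct result:
print("Rounded full value:", f"{true_value:.1f}")  # 8.3
```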
What to do if you really have a final 5. There are two schools of thought on this.
- Some people will suggest that you always round a final 5 up.
- Others will suggest that you round it up half of the time and down half of the time; the usual way to do this is to round a final 5 so that the previous digit becomes an even number. For example, 0.35 becomes 0.4 and 0.65 becomes 0.6.
What should you do? Well, this is really a rather arcane point, not worth much attention. If your instructor prefers a particular way, do it. It really is not a big deal, one way or the other. If you are looking to decide your own preferred approach, I'd suggest you read a bit about what various people suggest, and why. If you just want my opinion, well, I suggest "rounding even". |
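For the curious, both conventions are easy to try out. The sketch below uses Python's decimal module so that the final 5 is exact (binary floats often hide this); the 0.35 and 0.65 cases come out exactly as described above:

```python
# Comparing "always round a final 5 up" with "round even", using exact decimal
# arithmetic so that the final 5 really is a final 5.
from decimal import Decimal, ROUND_HALF_UP, ROUND_HALF_EVEN

for value in (Decimal("0.35"), Decimal("0.65")):
    always_up = value.quantize(Decimal("0.1"), rounding=ROUND_HALF_UP)
    round_even = value.quantize(Decimal("0.1"), rounding=ROUND_HALF_EVEN)
    print(f"{value}: always up -> {always_up}, round even -> {round_even}")

# 0.35: always up -> 0.4, round even -> 0.4
# 0.65: always up -> 0.7, round even -> 0.6
```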
Ask any person on the street how many dimensions there are and, hopefully, they’ll say that there are at least three spatial dimensions (length, width, and depth), with the addition of a temporal dimension (time). Asking a physicist the same question, however, might blow your mind. For instance, theoretical physicists who work in string theory claim the universe is made up of at least 10 spatial dimensions, and they have the math to back them up.
The visible 3-D reality
The three spatial dimensions — length, width, and height (or depth) — are pretty straightforward. With these dimensions, you can pinpoint your exact physical location at any given moment.
One-dimensional (1-D) space can be visualized as a single bead on a thread. You can slide the bead forward or backward, but really all you need is a single value to determine its position in this dimension, which is length. One-dimensional space has no other discernible qualities besides length. In two-dimensional (2-D) space, you need two sets of coordinates to determine the location of a point. It’s like the bead is now in a mesh, where it can slide not only forward and backward but also sideways. Finally, in three-dimensional (3-D) space, depth allows us to slide the bead up and down on a multi-threaded mesh.
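For readers who like a concrete handle on "one more coordinate per dimension", here is a toy sketch (the bead positions are made up); the same distance formula works no matter how many coordinates you feed it:

```python
# Made-up bead positions in 1-D, 2-D, and 3-D space.
from math import sqrt

bead_1d = (4.0,)             # length only
bead_2d = (4.0, 2.5)         # length and width
bead_3d = (4.0, 2.5, 1.2)    # length, width, and depth

def distance_from_origin(point):
    # Each extra dimension simply adds one more coordinate to the sum.
    return sqrt(sum(coordinate ** 2 for coordinate in point))

for point in (bead_1d, bead_2d, bead_3d):
    print(f"{len(point)}-D point {point}: {distance_from_origin(point):.3f} from the origin")
```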
In geometric terms, 1-D is a line, 2-D is a square, and 3-D is a cube.
Beyond the three dimensions
Time is considered to be the fourth dimension. However, it is not a spatial dimension. We need time to locate objects in the observable universe because everything is in motion. In relativistic space, Einstein added time to the three classical dimensions of space. Mathematically, these four dimensions are bound together into what is commonly referred to as spacetime. This was a huge leap of thought that went beyond mathematical formalism. For instance, it is only in such a 4-D model of nature that electromagnetism can be fully and accurately described.
But are there more than three spatial dimensions? That’s a challenging question because our minds are designed to perceive only length, width, and height. Some scientists who subscribe to string theory claim that there’s more to reality than meets our puny mammalian eye.
Our knowledge about the subatomic composition of the universe is summarized in what is known as the Standard Model of particle physics. The Standard Model describes both the fundamental building blocks out of which everything is made and the forces through which these blocks interact. There are twelve basic building blocks that we know of (six quarks and six leptons) and four fundamental forces (gravity, electromagnetism, and the weak and strong nuclear forces). Each fundamental force is produced by fundamental particles that act as carriers of the force. For instance, the photon, which is a particle of light, is the mediator of electromagnetic forces.
The behavior of all of these particles and forces is described with the utmost precision by the Standard Model, with one notable exception: gravity. It’s just proven extremely challenging to describe gravity microscopically. This is one of the most important problems in theoretical physics today — finding a quantum theory of gravity.
String theory attempts to solve this conundrum by unifying two theories that describe how the universe works: general relativity and quantum mechanics. For this reason, it is sometimes called the ‘Theory of Everything.’
Within this theoretical framework, all the fundamental particles of the Standard Model are replaced by one-dimensional objects called strings. These strings vibrate in the four large-scale dimensions of spacetime described by general relativity, plus an extra six ‘compact’ dimensions (one for electromagnetism and five for the nuclear forces).
One reason we may be unable to detect these speculative extra dimensions is that they are too “compact”, in the sense that they are too small for us to perceive. Another explanation is that the extra dimensions are instead too “large”, which would restrict our perspective to a 4-dimensional surface within a higher-dimensional universe or multiverse.
One way to visualize the extra six dimensions is in the form of a Calabi–Yau manifold, in which the extra dimensions curl up around each other, becoming so tiny that they’re extremely hard to detect. These manifolds retain the symmetry between left and right-handed particles and preserve supersymmetry just enough to replicate certain aspects of the Standard Model. There are tens of thousands of possible Calabi-Yau manifolds for six dimensions, and string theory offers no reasonable means of determining which is the right one.
There are various versions of string-theory equations describing 10-dimensional space. However, in the 1990s, the physicist Edward Witten at the Institute for Advanced Study in Princeton proposed that string theory could be simplified if we viewed it from an 11-dimensional perspective. This framework is called M-theory. What’s more, according to bosonic string theory, there are as many as 26 dimensions.
It should also be said that, to date, there is no direct experimental evidence that string theory itself is the correct description of nature. The jury is still out while physicists are having a lot of fun poking into the fabric of reality itself.
The F5 gene, also known as Factor V gene, is responsible for the production of coagulation factor V, a protein involved in the blood clotting system. Mutations in the F5 gene can lead to various forms of thrombophilia, a rare blood clotting disorder.
Thrombophilia is characterized by an increased risk of developing abnormal blood clots in the bloodstream. Heterozygous mutations in the F5 gene, such as the Factor V Leiden variant, are the most common genetic cause of thrombophilia. These mutations result in a hyperactive form of Factor V, leading to an increased clotting tendency.
Testing for mutations in the F5 gene is available through genetic testing laboratories and can help identify individuals at risk of developing blood clots. Additional information on these genetic variants and their association with thrombophilia can be found in scientific articles and databases such as OMIM (Online Mendelian Inheritance in Man) and PubMed.
The F5 gene is one of several genes involved in the blood clotting system. Other genes implicated in thrombophilia include F2 (the gene for prothrombin) and genes encoding proteins such as antithrombin and protein C. Understanding the genetic factors contributing to thrombophilia is essential for developing better diagnostic tools and targeted therapies for related disorders.
In conclusion, the F5 gene plays a crucial role in blood clotting, and mutations in this gene can lead to thrombophilia. Testing for F5 gene mutations is available, and further information on these genetic variants can be found in scientific databases and articles. Studying the F5 gene and other genes related to clotting disorders can provide important insights into genetic factors influencing thrombophilia and help improve patient care and management of these conditions.
Health Conditions Related to Genetic Changes
Genetic changes in the F5 gene can lead to the development of various health conditions. These changes can be caused by a combination of genetic and environmental factors. Genetic testing can help identify these changes and assess the risk of developing certain conditions.
One of the most well-known health conditions related to genetic changes in the F5 gene is thrombotic disorders, such as deep vein thrombosis (DVT) and pulmonary embolism. These conditions are usually caused by a specific genetic variant called factor V Leiden (FVL). Individuals who are heterozygous for the FVL variant have an increased risk of developing these thrombotic disorders.
In addition to thrombotic disorders, genetic changes in the F5 gene can also be associated with other forms of thrombophilia and related clotting disorders. Tests are available to detect these genetic changes and assess the risk of developing such conditions. References to scientific articles and resources about these conditions can be found in the catalog of the Online Mendelian Inheritance in Man (OMIM).
It is important to note that genetic changes in the F5 gene are relatively rare, and not all individuals with these changes will develop thrombotic or clotting disorders. The presence of genetic mutations does not guarantee the development of health conditions, as other factors, such as lifestyle and overall health, also play a role in disease development.
In summary, genetic changes in the F5 gene can lead to the development of thrombotic disorders and other clotting-related conditions. Genetic testing and information from scientific articles and resources can provide valuable insights into the risk and management of these health conditions.
Factor V deficiency
Factor V deficiency is a genetic disorder that affects the clotting system. It is caused by mutations in the F5 gene, which encodes Factor V, a protein involved in the formation of blood clots. These mutations reduce the amount or activity of functional Factor V and, unlike the Factor V Leiden variant, typically lead to a bleeding tendency rather than to thrombosis.
The Factor V Leiden variant, by contrast, is one of the most common genetic risk factors for thrombophilia, a condition characterized by an increased tendency to develop abnormal blood clots. Other rare variants in the F5 gene are also associated with disturbances of the clotting system.
Testing for Factor V deficiency includes genetic tests to identify specific mutations in the F5 gene, as well as functional tests to assess the activity of the Factor V protein. These tests are available in specialized laboratories and healthcare settings. It is important to note that not all individuals with Factor V deficiency will develop thrombosis.
References to scientific articles, databases, and resources related to Factor V deficiency and thrombophilia can be found on PubMed, OMIM, and other online catalogs. Additional information on genetic and functional testing, as well as the management of thrombotic conditions, can be found in various health resources.
Factor V Leiden thrombophilia
Factor V Leiden thrombophilia is an inherited disorder that increases the risk of developing abnormal blood clots in veins. The disorder is caused by mutations in the F5 gene, which is responsible for encoding factor V, a protein involved in the blood clotting system.
Factor V Leiden thrombophilia is characterized by a specific mutation in the F5 gene, known as the Factor V Leiden variant. This variant makes factor V resistant to being inactivated by activated protein C, a natural anticoagulant that normally limits clot formation. As a result, individuals with this variant have a higher risk of developing thrombosis, or the formation of blood clots.
Heterozygous Factor V Leiden thrombophilia, which means having one copy of the mutated F5 gene, is the most common form of the disorder, affecting about 5% of the general population. Homozygous Factor V Leiden thrombophilia, in which both copies of the F5 gene are mutated, is much rarer and associated with a higher risk of thrombosis.
Factor V Leiden thrombophilia can be diagnosed through genetic testing, which looks for the presence of the F5 gene mutation. Testing for this disorder is often performed in individuals with a personal or family history of thrombosis or other clotting disorders.
People with Factor V Leiden thrombophilia may be more prone to developing blood clots in certain situations, such as during pregnancy or when taking hormonal contraceptives. Therefore, it is important for individuals with this condition to be aware of the potential risks and discuss them with their healthcare providers.
Additional information on Factor V Leiden thrombophilia and related disorders can be found in scientific articles, databases, and resources such as OMIM, PubMed, and the Genetic Testing Registry. These resources provide information on the genetic variants, functional changes, and other factors related to this condition.
Genetic testing for Factor V Leiden thrombophilia is available and can be used to assess an individual’s risk of developing thrombosis. However, it is important to note that the presence of the Factor V Leiden variant does not guarantee the development of thrombosis, and other genetic and environmental factors can also contribute to the risk.
Testing for mutations in other clotting and prothrombin genes may also be recommended in certain cases, as these factors can interact with Factor V Leiden thrombophilia and further increase the risk of thrombosis.
In summary, Factor V Leiden thrombophilia is a genetic condition that increases the risk of thrombosis. The disorder is caused by mutations in the F5 gene, specifically the Factor V Leiden variant. Genetic testing and additional evaluation can help identify individuals at risk and guide appropriate management and preventive measures.
There are additional genetic variants and changes in the F5 gene that are associated with rare inherited clotting disorders. These variants are less common than the Factor V Leiden mutation and are usually found in combination with other related mutations.
OMIM is a database that catalogs the genetic variants and associated health conditions. The F5 gene is listed in the OMIM database as being associated with various thrombophilia (blood clotting) conditions.
There are several other databases available for testing and obtaining information on these genetic variants and their association with thrombotic disorders. PubMed is a popular scientific resource for accessing articles and references on genetic factors related to thrombophilia.
Prothrombin gene mutation is another genetic variant associated with thrombotic disorders. The gene, also known as Factor II, is involved in the clotting system. Mutations in the prothrombin gene can lead to an increased risk of developing conditions such as deep vein thrombosis.
Testing for the presence of these mutations is available, and it is recommended for individuals with a family history of thrombotic disorders or those who have experienced blood clotting events at a young age.
In summary, in addition to the Factor V Leiden mutation, there are other genetic variants and changes in the F5 gene that are associated with thrombotic disorders. Testing and resources for obtaining information on these variants are available to better understand and manage these conditions.
Other Names for This Gene
The F5 gene is also known by other names:
- Factor V gene
- Coagulation factor V gene
- F5 AGA multiple cloned gene (Factor V)
- Proaccelerin gene
- Thrombophilia gene
- Factor V Leiden gene
These names are used to refer to the F5 gene in various scientific and health resources, registries, and databases. The F5 gene plays a crucial role in the clotting system, and mutations or variants in this gene can lead to thrombotic disorders, thrombophilia, and other clotting-related conditions.
Additional Information Resources
- Scientific Articles: There are several articles that provide valuable information on the F5 gene and related topics. Some of these articles include:
- Castoldi et al. (2007) – “Functional Variant of Factor V Cleaved by Thrombin Activates Factor V and Impairs Thrombin Generation.” This article discusses the functional changes in factor V caused by thrombin cleavage.
- Rosing et al. (2004) – “Heterozygous Factor V Leiden Thrombophilia: A Risk Factor for Developing Thrombotic Disorders.” This article explores the relationship between heterozygous factor V Leiden and the development of thrombotic disorders.
- Genetic Databases: There are several genetic databases where you can find more information on the F5 gene and its variants:
- OMIM (Online Mendelian Inheritance in Man): This database provides comprehensive information on genetic conditions and genes, including the F5 gene.
- PubMed: PubMed is a resource where you can find scientific articles on various topics, including the F5 gene and its related conditions.
- Thrombophilia Testing: If you suspect you have a factor V deficiency or other clotting disorders, there are tests available to determine your risk. Some of the available tests include:
- Thrombophilia DNA Testing: This test analyzes your DNA for mutations or changes in the F5 gene and other genes involved in the clotting system.
- Prothrombin Time Test: This test measures how long it takes for your blood to clot and can help identify clotting disorders.
- F5 Leiden Mutation Test: This specific test checks for the presence of the F5 Leiden mutation, which is a known risk factor for thrombophilia.
- Thrombophilia Registry: The Thrombophilia Registry is a catalog of individuals with a known F5 gene variant or other genetic factors associated with thrombophilia. It provides a valuable resource for researchers and healthcare professionals.
Tests Listed in the Genetic Testing Registry
Genetic testing for the F5 gene, also known as the coagulation factor V gene, can provide valuable information about an individual’s risk for developing thrombotic disorders. This gene encodes a protein called coagulation factor V, which plays a crucial role in the clotting system.
There are several tests listed in the Genetic Testing Registry (GTR) that focus on different aspects of the F5 gene and its variants. These tests can help identify mutations or changes in the gene that may be associated with an increased risk of thrombophilia, a condition characterized by abnormal blood clotting.
Some of the tests listed in the GTR include:
- F5 Leiden Mutation Detection: This test detects a specific variant of the F5 gene, known as the F5 Leiden mutation, which is associated with an increased risk of clotting disorders.
- F5 Leiden Heterozygous and Homozygous Detection: This test determines whether an individual has one copy (heterozygous) or two copies (homozygous) of the F5 Leiden variant, which can further affect their risk of thrombosis.
- F5 Functional Assay: This test assesses the activity of the coagulation factor V protein produced by the F5 gene, providing insight into its functional role in the clotting system.
- F5 Deficiency Detection: This test looks for mutations or changes in the F5 gene that may result in a deficiency of coagulation factor V, which can contribute to bleeding disorders.
These tests can be instrumental in identifying individuals at risk for thrombotic disorders and guiding appropriate medical interventions. However, it is important to consult with healthcare professionals to understand the implications of the test results and the recommended management strategies.
Additional information on these tests and related variants of the F5 gene can be found in scientific articles, databases, and resources such as the Online Mendelian Inheritance in Man (OMIM), PubMed, and the Genetic Testing Registry.
References and resources:
- Castoldi, E., & Rosing, J. (2021). The F5 Leiden Mutation: Thrombosis Risks and Clinical Management. Clinical Epidemiology, 13, 507–518. DOI: 10.2147/CLEP.S267024
- Online Mendelian Inheritance in Man (OMIM). Retrieved from https://www.ncbi.nlm.nih.gov/omim
- PubMed. Retrieved from https://pubmed.ncbi.nlm.nih.gov/
- Genetic Testing Registry (GTR). Retrieved from https://www.ncbi.nlm.nih.gov/gtr
Disclaimer: This article provides general information about genetic testing for the F5 gene and should not be used as a substitute for medical advice. Please consult with a healthcare professional for personalized guidance on genetic testing and its implications for your health.
Scientific Articles on PubMed
PubMed is a widely used database that provides access to a vast collection of scientific articles. It is a valuable resource for researchers and healthcare professionals seeking information on various genetic factors, such as clotting disorders and other conditions related to thrombotic events. PubMed offers a comprehensive catalog of articles that cover a wide range of topics, including genetic variants, mutations, and other factors contributing to the development of thrombophilia and related disorders.
In the context of the F5 gene, PubMed provides access to a variety of articles discussing the genetic variants associated with thrombophilia. These variants, such as the F5 Leiden variant, have been extensively studied and are well-documented in scientific literature. Researchers have identified the heterozygous form of the F5 Leiden variant as a significant risk factor for developing thrombosis.
PubMed is a valuable resource for finding scientific articles that explore the functional aspects of genes and their relation to clotting disorders. By leveraging PubMed’s search capabilities and extensive database, researchers can find references to articles that discuss the role of the F5 gene in thrombotic events. This includes articles that examine the genetic changes, mutations, and variants of the F5 gene, as well as their impact on clot formation and other related processes.
PubMed also provides access to articles that discuss the use of genetic testing for identifying thrombophilia-related variants, including F5 Leiden and Prothrombin mutations. These articles provide valuable information for healthcare professionals seeking to diagnose and manage patients with clotting disorders.
Additionally, PubMed hosts a registry of scientific articles on various forms of thrombophilia and related conditions. This registry includes articles written by leading researchers and experts in the field, offering a wealth of knowledge and insights into the genetic factors contributing to clot formation and related disorders.
Overall, PubMed is a valuable resource for accessing scientific articles on genetic factors, including those related to thrombophilia. It offers a wide range of resources, including articles that explore the functional aspects of genes, genetic variants, and their implications in clotting disorders. By leveraging the information available through PubMed, researchers and healthcare professionals can stay up-to-date with the latest scientific findings and advancements in the field of thrombophilia research.
Catalog of Genes and Diseases from OMIM
OMIM (Online Mendelian Inheritance in Man) is a comprehensive catalog of genes and genetic disorders. It provides a valuable resource for researchers, healthcare professionals, and individuals interested in understanding the genetic basis of diseases.
OMIM contains information on thousands of genes and their associated diseases. It includes detailed descriptions of the functional aspects of genes, as well as the mutations and variants that are known to be associated with specific diseases. OMIM also provides a list of references to scientific articles and publications related to each gene and disease.
The catalog includes a wide range of genetic conditions, from rare disorders to more common diseases. For example, OMIM provides information on conditions such as Factor V Leiden thrombophilia, a genetic disorder that increases the risk of developing abnormal blood clots. This condition is caused by a variant of the F5 gene that produces a form of clotting factor V which resists being switched off.
OMIM is a valuable resource for healthcare professionals and researchers developing genetic tests for various disorders. The catalog provides information on the available testing methods and the forms of the gene that should be tested for each condition. It also lists additional resources, such as databases and registry resources, that can provide further information on specific genes and diseases.
In the case of Factor V Leiden thrombophilia, OMIM provides information on the testing methods available for the gene variant associated with the condition. It also lists the changes in the gene that are known to be associated with the deficiency in clotting factor V. This information can be useful for healthcare professionals in diagnosing and managing patients with this disorder.
Overall, OMIM is a valuable catalog for researchers, healthcare professionals, and individuals interested in genetic disorders. It provides comprehensive information on genes and diseases, including functional aspects, mutations, testing methods, and additional resources for further research. OMIM is a reliable and trusted source for up-to-date information on genetic conditions and their underlying genetic factors.
Gene and Variant Databases
When studying the F5 gene and its variants, researchers and healthcare professionals rely on gene and variant databases to access and share information. These databases provide a collection of genetic and genomic data related to specific genes and their variants, playing a vital role in understanding the impact of these genetic changes on human health.
Gene and variant databases contain information on various genes and their associated variants. They provide details on gene function, variant frequencies in different populations, and their potential implications for health and disease. These databases serve as valuable resources for scientists, clinicians, and individuals seeking information on specific genetic conditions.
One of the widely used gene and variant databases is the Online Mendelian Inheritance in Man (OMIM) database. OMIM provides a vast collection of curated scientific articles and clinical information on genetic disorders caused by different gene mutations. For example, OMIM includes information on the Leiden variant of the F5 gene, which is associated with an increased risk of developing thrombophilia, a condition characterized by an increased tendency to form blood clots in the bloodstream.
Another important database is PubMed, which contains abstracts and full-text articles from scientific journals. Researchers can search for specific F5 gene variants and explore the latest scientific findings related to their functional impact and association with different diseases.
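As an illustration only (this query is not part of the article, and the search term is simply an example), such a PubMed search can also be run programmatically through NCBI's public E-utilities service:

```python
# Hedged sketch: querying PubMed through NCBI's E-utilities (esearch endpoint).
# The search term below is only an example and is not taken from the article.
import json
import urllib.parse
import urllib.request

params = urllib.parse.urlencode({
    "db": "pubmed",
    "term": "F5 gene factor V Leiden thrombophilia",
    "retmax": 5,
    "retmode": "json",
})
url = f"https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?{params}"

with urllib.request.urlopen(url) as response:
    result = json.load(response)

# Print the PubMed IDs (PMIDs) of the first few matching articles.
print(result["esearchresult"]["idlist"])
```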
In addition to these databases, there are other resources available for accessing information on gene variants and related conditions. For example, the Human Gene Mutation Database (HGMD) catalogues disease-causing mutations in various genes, including F5. This database provides information on the functional consequences of different mutations and their association with specific diseases.
Genetic testing laboratories also maintain their internal databases to record and analyze data from individuals undergoing genetic testing. These databases help in identifying novel variants and improving the understanding of their clinical significance.
Testing for F5 gene variants, such as the Leiden variant and the prothrombin variant (F2), is commonly performed in individuals with a personal or family history of thrombotic disorders. These tests can identify individuals who are heterozygous or homozygous for these variants, helping to assess their risk of developing thrombophilia.
The databases and resources mentioned above play a crucial role in advancing scientific knowledge about the F5 gene and its variants. They provide a comprehensive collection of information on the functional changes caused by different variants, their association with thrombotic disorders, and additional factors that may influence their impact on health.
It is important for healthcare providers and individuals to stay up-to-date with the latest research and information available in these databases. This knowledge can help in making informed decisions about genetic testing, understanding the risk factors for thrombophilia, and providing appropriate medical interventions to individuals with F5 gene variants.
Here is a list of references related to the F5 gene and thrombophilia:
Castoldi, E., Simioni, P., Leebeek, F. W., & Tormene, D. (2011). F5 and F2 gene mutations in patients with low tissue factor pathway inhibitor levels. Journal of thrombosis and haemostasis: JTH, 9(7), 1382–1384.
Rosing, J., & Manucci, P. M. (2001). Thrombophilia in family members of patients with factor V Leiden mutation: clinical evaluation. Thrombosis and haemostasis, 86(1), 660–662.
Rosing, J., & Manucci, P. M. (2001). Thrombophilia in the family members of patients with factor V Leiden mutation: timely screening. Thrombosis research, 99(1), 47–49. |
1958 Article

Special levies or fines were imposed in the 16th century as a milder form of persecution alongside the harsher forms of confiscation of property, exile, imprisonment, and execution, and were continued on down into the 18th century after the harsher forms were abolished. In Zürich in 1525 parents who would not have their children baptized were fined up to five pounds, while in 1526 those who baptized others were given fines of 10-15 pounds. In some cases the Anabaptists had to pay the trial costs. Later those who aided Anabaptists were given fines, while the Anabaptists themselves were fined more heavily. In the mandate of the diet of Augsburg of 1577 lenient judges were threatened with money fines. In most of the mandates of the time similar threats were made against all who would favor the Anabaptists. In the Palatinate those who failed to report Anabaptists were threatened with fines, while in Hesse recanting Anabaptists who returned to their faith were required to pay into the treasury for the poor; Anabaptists leaving their homeland had to pay an emigration fee of 10 gulden.
In the 17th and 18th centuries in the Palatinate all Mennonites were subject to special taxation. A head tax was collected for attendance at their services, which had previously been forbidden. Another special tax was imposed as compensation for exemption from military service. Soon a general tax was imposed. Hutterites living in Mannheim in the mid-17th century were freed from guard duty on payment of two to three gulden per family. Although the Concession of 1664 granted by the Electoral Palatine government mentioned the right of exemption from militia duty, in the course of time a special annual tax was imposed toward the cost of maintaining the militia, at first three gulden per family, later six, and finally twelve gulden. The Mennonites protested this tax, at times on the grounds of poverty, at other times on grounds of conscience since they did not want to contribute toward war in any respect. Frequently special fees were levied for registration of deaths, births, and marriages. The Mennonite settlement of Ibersheim was a special case, since here the settlement paid an annual special tax of 50 gulden in place of head taxes.
The collection of the Mennonite head taxes in the Electoral Palatinate, Baden-Durlach, and other territories often led to quarrels between the local and general authorities; usually the general authorities won out, while the Mennonites paid the bill. With every change of rulers the Mennonites had to petition for a renewal of their privileges and usually had to pay extra "donation money" in this connection. At times large sums of money were forced out of Mennonite pockets into the princely treasuries by simple pressure or extortion. In Alsace the Mennonites had a long and hard struggle to secure and maintain the privilege of exemption from military service, in the course of which heavy payments of money had to be made. Sometimes families preferred to emigrate to America rather than keep on paying the severe taxes. It was only toward the end of the 18th century, when the valuable contribution of Mennonite farmers to the improvement of agriculture was recognized, that the levies of special "protection money" finally lessened and later ceased. The French Revolution (1789) in general finally put an end to the special taxation of Mennonites in South Germany and Alsace.
In other parts of Germany as well special levies were imposed upon Anabaptists and Mennonites, in effect in return for limited toleration. In East Friesland, for instance, the ruler Rudolf Christian in 1626 introduced the practice of requiring a certain payment per family annually in return for a Schutzbrief (letter of protection), thereby setting a pattern for his successors until East Friesland became Prussian in 1744. But even the Lutheran pastors in this area, particularly in Norden, joined in extorting money by demanding that the Mennonites pay taxes for the upkeep of Lutheran churches and by collecting fees for services, such as funerals, which they did not render. It was only in 1894 that the Mennonites were freed from making contributions to the state church. Similar fees had to be paid by Mennonites in Krefeld to the Reformed Church until they were released from them by the king of Prussia ca. 1720, although in 1721 he required the payment of a fee for the privilege of exemption from military service.
In other places in Germany as well, money payments were required in lieu of military service. In 1814 in West Prussia, when the Landsturm (militia) was established, Mennonites were required to pay a certain sum per acre to support the Landsturm. In Prussia Mennonites were everywhere required to pay the same church taxes (for the support of the state church of course) well into the 20th century. In West Prussia the long legal battle necessary to free the Mennonites from the state church taxes was not victoriously concluded until 1922.
War Taxes. Among the Anabaptists in Moravia the differences regarding payment of special war taxes to support the war against the Turks led to a serious division in 1527. In Nikolsburg Hubmaier led a group ("Schwertler") who paid the tax and approved the use of the sword in self-defense against the Turks, while the opposing group ("Stabler") were negative on both points. In 1511 the "Pikards" in Austerlitz declared they could not pay the war taxes, which were against their conscience. The agreement of 26 November 1556 between the Palatine and Hutterite Anabaptists in the region of Kreuznach stipulated the following regarding the payment of war taxes: "But what is blood money, and serves wars or other unrighteous things or undertakings of the government out of itself and not out of divine orders and thereby attacks the conscience, the God-fearing man is not obliged to pay them, because God demands of him that he love his enemy (Matthew 5; Romans 12), and the God-fearing man has promised this to God, and he shall not make any weapon that serves only that end (Isaiah 2; Micah 4), that the God-fearing man may not be a partaker in their wickedness or blood guilt."
The Hutterites consistently refused to pay war taxes and special levies. Peter Riedemann's Rechenschaft of 1545 says on this point: "For war, killing, and bloodshed (where it is demanded especially for that) we give nothing, but not out of wickedness or arbitrariness, but out of the fear of God (1 Timothy 5) that we may not be partakers in strange sins."
In the United States some contemporary pacifists have refused to pay that portion of their federal income taxes which they calculate goes to support the military department and preparation for war, which in the 1950s was about two thirds of the total; few if any Mennonites had taken this course. However, in the time of the Revolutionary War (1776-1783) the Mennonites of the Franconia Conference in Pennsylvania divided over their attitude toward the Revolution, including the payment of special war taxes to the rebellious colonies, the majority being opposed to the payment of such taxes. As a consequence Bishop Christian Funk and his small group of followers were expelled in 1777. -- HSB
1989 Update

Taxes and war are inextricably linked together. When governments wage war, they eventually levy taxes to pay for them. The taxes may be explicit or indirect. Unfortunately most citizens find themselves implicated in making payments to a military leviathan, regardless of which century or country they live in. Anabaptists almost consistently avoided military service but, with the exception of the Hutterites, expressly urged payment of tax money which made war possible. Can Mennonites still be conscientious objectors when the primary tool of war is money?
For 16th-century Hutterites the issue of "blood money" was clear and unequivocal. They could see no significant difference between fighting a war and supporting it with taxes. Still Anabaptist and Mennonite histories reveal a lot of indecision about the propriety of paying military-related taxes. Much of this is due to the widespread assumption that there was a biblical mandate to pay all taxes much like other financial obligations. These assumptions are being challenged again by scholars and others in the 20th century.
Conscience was alive among Mennonites and Dunkers (Church of the Brethren) on 7 November 1775, when they submitted a joint declaration to the General Assembly of the Commonwealth of Pennsylvania, saying that they were ready at all times to help those in need, but that they were "not at Liberty in Conscience to take up Arms to conquer our Enemies." Drafters of the U.S. constitution (1783-89) recognized that, for the historic peace churches, conscientious objection to war taxes was equivalent to conscientious objection to military service. However, this recognition did not remain in the final draft of the second amendment to the constitution. As a result, states then required payment in lieu of military service.
In Prussia Mennonites were disinclined to pay the military and church taxes based on land ownership. By the 1780s they were apprehensive about the growing military preparations, particularly the annual tax of 5,000 thaler required for the support of military schools. This factor prompted many to relocate to southern Russia and was among the factors leading others to form the Kleine Gemeinde by 1814.
There were no income taxes in the United States until 1862, when they were imposed to pay for the Civil War. Americans were outraged at the imposition of war taxes, which were lifted again in 1872. It is notable that the first proposal of a general federal government income tax was made in 1815, in part to pay for the expenses of the war of 1812-14.
When Mennonites were migrating to Kansas, an existing law (1865) required the annual payment of a $30 fine for the privilege of exemption from military service. In response to recommendations from the governor, the legislature repealed the "onerous tax" on March 9, 1874.
Income taxes reappeared in the United States in 1913 just in time to pay for World War I. It was a "class tax" upon the wealthy. Most Mennonites were not affected by it. However, they cooperated with patriotic expectations as best they could, "developing their own programs of voluntary benevolence and relief to provide a moral equivalent of military service and war bond drives." With increased pressure practically everyone "bought a few bonds." Bond drives were a problem in that they were designed not only to finance the war but also to foster patriotism. The bonds did bring the money aspect of war into focus (Margaret Entz in Mennonite Life [September 1975], 4-9), and some Mennonites who refused to buy bonds suffered violence as a result.
World War I proved to be a watershed experience for the Mennonites. Their confrontation with the government's military authorities on the draft was so traumatic that the peace churches turned almost the whole of their attention to military duty requirements and forgot their testimony against taxes for war.
World War II saw the Victory Tax of 1943 established as the first "mass tax" through withholding at the source of income, the employer. Continued uneasiness with governmental pressure to purchase war bonds led to a Mennonite Central Committee effort to substitute "Civilian Bonds." However, U.S. Treasury officials did not clearly commit themselves to use the proceeds strictly for civilian purposes.
The war tax issue remained largely dormant during World War II. The first Mennonite to mention the subject was Austin Regier, a non-registrant, who was sentenced to a federal penitentiary for refusing to comply with the draft. He firmly believed that "the consistent pacifist would refuse war taxes." The idea of organizing war tax resistance in the United States seems to have begun with the Peacemaker Movement which was formed by a heterogeneous group of pacifists in Chicago early in 1948.
An increasingly larger portion of the U.S. federal budget has gone to finance past, present, and future wars in the 1970s and 1980s. Numerous statements have been issued as part of a new wave of concern beginning in 1958. "A Call to Action" was issued by Mennonites and Brethren in Christ meeting in Minneapolis on 21 November 1970. The Way of Peace, a Christian declaration supporting war tax refusal, was adopted by the General Conference Mennonite Church at Fresno, CA, on 19 August 1971. Other statements and resolutions followed in 1974, 1977, 1980, and 1983. The Mennonite Church (MC) issued resolutions in 1979, 1981, 1985, and 1987. A special conference (GCM) was held in Minneapolis in February 1979 specifically to discuss and explore war tax options. The General Board was mandated "to use all legal, legislative, and administrative avenues for achieving a conscientious objector exemption." This followed the Inter-Mennonite/Brethren in Christ War Tax Conference held in Kitchener, ON, 30 October-1 November 1975. In response to the growing war tax concern, the Commission on Home Ministries (GCM) began publishing the God and Caesar newsletter in January of 1975.
As early as 1959 the Society of Friends introduced into the U.S. Congress the "People's Program for Peace" bill. They also circulated a proposal called the "Civilian Income Tax Fund." Other peace tax fund legislation was formulated in 1973. Many Mennonites had refused to pay the telephone excise tax during the Vietnam War.
In March 1974 a Mennonite pastor, Michio Ohno, refused to pay his allotment for Japanese military expenses. Out of the protest, Japanese Fellowship of Reconciliation members, Quakers, Mennonites, and other nonviolent activists worked together to form a group called COMIT (Conscientious Objection to Military Tax). Within 10 years COMIT grew to 400 members, half of whom filed to refuse payment to the military. During the 1980s 22 members brought their appeal to the Tokyo Local Court to challenge the Japanese Government and its tax offices for collecting and spending tax funds unconstitutionally. Hearings were held 23 times during a five-year period. The judges seemed to be avoiding their responsibility. The issue of national defense was so political that the courts refrained from making a decision.
In 1975 Cornelia Lehn (a Canadian citizen) requested her employer, the General Conference Mennonite Church, to refuse to withhold taxes from her salary. Months of intense, agonizing debate followed. In the 1980s similar requests by church employees came before both the Mennonite Church and the Mennonite Central Committee. Both institutions declined to comply with employee requests and continued to withhold taxes and forward the money to the government. Finally, on 1 September 1983, the General Conference Mennonite Church honored such requests. By official conference action "the employees of the Church administration are given the power to be true to the high demands of Christ's Law of Love, in that they can decline to remit withholding taxes from employees that have requested it and therefore open up the possibility to resist for reasons of conscience to pay for the preparation of war." The conference reported these decisions to the federal government's Internal Revenue Service (IRS) but as of 21 September 1987, no action had been taken against the conference. Because of this shift of attention to the corporate level new opportunities for witness have opened up. It is believed that never before in U.S. history have employers refused to withhold taxes for those employees who request this action for reasons of conscience.
In 1978 "Conscience Canada" was organized at the instigation of a few Quakers in Victoria, BC. John R. Dyck of Saskatoon, SK, was among the Mennonites who invested energy in this peace education effort. In 1980, Canadians learned that "freedom of conscience" was to be included in the new constitution. Since 1982 this recognition of conscience has become the basis for a new wave of action to create a legal alternative to paying taxes for war. (Peace tax legislation was introduced in 1983 and 1985. The first nationwide meeting was held in April 1987 at Ottawa).
An increase in open tax resistance is evident despite the Tax Equity and Fiscal Responsibility Act of 1982 which provides that, in the United States, an individual "shall pay a penalty of $500" if he or she files an income-tax return that is incorrect due either to taking a frivolous position or to seeking to delay or impede administration of the tax law. The 1982 tax law is unique in that there is an automatic penalty imposed on taxpayers. Furthermore, the penalty is assessed before opportunity for appeal is given. This is quite discriminatory and suggests that the Internal Revenue Service may be guilty of violating the fourth amendment of the United States Constitution.
Italy has demonstrated that legislation can remove the burden of military taxes. Italians accused of propagating the Peace Tax Campaign since 1981 have been fully acquitted on "grounds of particular moral and social value," or because "the act did not amount to a crime." Peace tax campaigns have emerged, not only in Japan, Canada, Italy, and the United States, but also in Germany, Spain, Switzerland, France, Great Britain, Belgium, New Zealand, Australia, Luxembourg, Austria, Norway, South Africa, Sri Lanka, and The Netherlands. This movement has gathered such momentum that an International Peace Tax Campaign Conference was held for the first time in Tübingen, West Germany, 18-21 September 1986. Marian Claassen Franz of Washington, DC, was among the 100 people who met to share information with War Resisters International in London. In 1987 the IRS approved tax deductible status for the Peace Tax Foundation. The campaign for a Peace Tax Fund in the United States now can focus its limited resources on lobbying, while the foundation expands its outreach and research programs.
Conscientious objectors to war in the United States recognize that the solution to their dilemma of conscience concerning the government's tax demands lies in the United States Congress. "So long as the Internal Revenue Code is deficient in recognizing freedom of conscience as protected by the first and ninth amendments, we shall be journeying through this dungeon of IRS levies, summonses and court trials. The origin and the solution to our problem lie most immediately with Congress, and ultimately with a restored public community of conscience" (Robert Hull, 1987). Today's combat soldier is the taxpayer -- the person who provides the money to produce and deploy the push-button hardware and software for mass annihilation. Individuals shoulder great responsibility for warfare and for peace. At times the most effective way to take responsibility is to refuse to collaborate. The task is progressively to make the coercion of conscience unthinkable by the majority who put their faith in military solutions. -- DDK
An Annotated Bibliography of Mennonite Writings on War and Peace, 1930-1980, ed. Willard Swartley and Cornelius J. Dyck. Scottdale, PA: Herald Press, 1987: 185-200.
Coffin, Linda B., Peter Goldberger, Robert Hull, Jay E. McNeil. Fear God and Honor the Emperor: a Manual for Military Tax Withholdings for Religious Employers. Philadelphia and Elkhart: Friends Committee on War Tax Concerns and New Call to Peacemaking.
Kaufman, Donald D. "War Taxes: Should They Be Paid?" Program Guide 1971. Scottdale, PA: 38-42.
Kraus, Wolfgang, ed. Was Gehört dem Kaiser? Das Problem der Kriegsteuern. Weisenheim am Berg, West Germany: Agape Verlag, 1984, 127 pp. and response by Victor Janzen. "Gebt dem Kaiser was des Kaiser's ist." Der Bote 60, no. 21 (25 May 1983): 21.
Brown, Dale W. and Vernard M. Eller, eds. "Symposium on Tax Resistance." Brethren Life and Thought 19 (1974): 101-24.
MacMaster, Richard K., Samuel L. Horst, and Robert F. Ulle. Conscience in Crisis. Scottdale, PA: Herald Press, 1979: 29-31, 68, 78-80, 113-15, 247-49, 354-62.
Dyck, C. J., ed. Introduction to. Mennonite History. Scottdale, PA: Herald Press, 1967: 52-53, 106, 120-34, 295-98.
Ruth, John L. 'Twas Seeding Time: a Mennonite View of the American Revolution. Scottdale, PA: Herald Press, 1976: 59-88, 162-63.
Ruth, John L. Maintaining the Right Fellowship. Scottdale, PA: Herald Press, 1984: 150-58.
Entz, Margaret. "War Bond Drives and the Kansas Mennonite Response." Mennonite Life 30 (September 1975): 4-9.
Schmidt, Melvin D. "Tax Refusal as Conscientious Objection to War." Mennonite Quarterly Review 43 (1969): 234-46.
Yoder, John Howard. "Why I Don't Pay All My Income Tax." Gospel Herald (22 January 1963): 81, 92; cf. Mennonite (26 February 1963): 132-34, and Sojourners 6, no. 3 (March 1977): 11-12.
Yoder, John Howard. The Christian Witness to the State. Newton, KS 1964: esp. p. 54.
Friesen, Duane K. Christian Peacemaking and International Conflict: a Realist Pacifist Perspective. Scottdale, PA: Herald Press, 1986: 134-40.
Goossen, Richard J. "An Examination of the Legal Justification for War Tax Resistance: the Scope of Freedom of Conscience Under the Canadian Charter of Rights and Freedoms." Conrad Grebel Review 4 (1986): 21-42 with response 4 (1986), 158-60.
Hershberger, Guy F. The Mennonite Church in the Second World War. Scottdale, PA: Mennonite Pub. House, 1951: ch. 11, pp. 138-148, on "Civilian bonds."
Hershberger, Guy F. The Way of the Cross in Human Relations. Scottdale, PA, 1958: 167, 179, 184, 196-97.
Consultation on Civil Responsibility -- a resource packet of 15 papers presented at Elkhart, IN (1-4 June 1978) under the auspices of the General Conference Mennonite Church.
The Way of Peace -- position statement (General Conference Mennonite Church), August 1971. Newton, KS, 1972.
Hull, Robert. 1040 Peace Tax Form. Newton, KS: Commission on Home Ministries, Peace and Justice, GCM, 1987: 1-8.
Bohn, E. Stanley. "The Missionary and the War Tax Refuser." Mennonite (11 June 1985): 308.
Charles, Howard H. "The Troublesome Tax Question." Builder (November 1972): 19-20, 30.
Driedger, Leo. "Positions on Tax Dollars for War Purposes," a 6-page compilation circulated by the Board of Christian Service (General Conference Mennonite Church). Newton, KS, 1960.
Adamson, Edith and Marian Franz. "Struggling with Taxes for Military Force." Mennonite (10 March 1987): 104.
Mennonite Weekly Review (31 August 1978): 6 (Japan).
Keidel, Levi. "The Mennonite Credibility Gap." Mennonite (23 December 1975): 730-31.
Neufeld, Elmer and John Unruh, report of MCC Peace Section meeting held in Chicago, IL, 21 January 1960. Akron, PA: MCC: 1-5.
Regehr, Ernie. Making a Killing. Toronto: McClelland and Stewart, 1975.
Souder, Eugene K. "Nonresistant People and the Federal Income Tax." Gospel Herald (27 December 1960): 1103.
Stoltzfus, Ruth C. "War Tax Research Report: Challenging Withholding Law on First Amendment Grounds: a special study prepared for Commission on Home Ministries (General Conference Mennonite Church)." August 1975: 1-16.
Toews, John E. "Paul's View of the State." Christian Leader (25 April 1978): 5-7.
Yoder, Edward. "Christianity and the State." Mennonite Quarterly Review 11, no. 3 (July 1937): 171-95.
Yoder, Edward. "The Obligation of the Christian to the State and Community -- 'Render to Caesar'." Mennonite Quarterly Review 13 (April 1939): 104-22.
Dick, LaVernae J. "A Noose for the Minister." Mennonite (21 April 1964): 263-65.
Franz, Marian C. "Conscience is Contagious." Mennonite (28 July 1987): 316-19.
Kreider, Robert and Mary S. Sprunger. Sourcebook: Oral History Interviews with World War One Conscientious Objectors. Akron, PA: MCC, 1986: esp. pp. 116, 128ff.
United States Comptroller General. Illegal Tax Protesters Threaten Tax System. Gaithersburg, MD: U.S. General Accounting Office, Document Handling and Information Services Facility, 8 July 1981, 70 pp.
Juhnke, James C. A People of Mission: a History of General Conference Mennonite Overseas Mission. Newton, KS: Faith and Life, 1979, 123 (Japan).
Minutes of the 109th [Seventh biennial] General Conference, Brethren in Christ Church, July 5-July 10, 1986. Nappanee, IN, Evangel Press), with minutes of earlier General Conferences cited with corresponding changes in dates: 40-41.
Mennonite Weekly Review (8 January 1987): 5 (on Japanese war tax trial, 1986).
Mennonite Reporter (11 May 1987): 1.
Author(s): Harold S. Bender, Donald D. Kaufman
Cite This Article
Bender, Harold S. and Donald D. Kaufman. "Taxes." Global Anabaptist Mennonite Encyclopedia Online. 1989. Web. 27 Oct 2016. http://gameo.org/index.php?title=Taxes&oldid=61230.
Bender, Harold S. and Donald D. Kaufman. (1989). Taxes. Global Anabaptist Mennonite Encyclopedia Online. Retrieved 27 October 2016, from http://gameo.org/index.php?title=Taxes&oldid=61230.
©1996-2016 by the Global Anabaptist Mennonite Encyclopedia Online. All rights reserved. |
The American Revolution was an inevitable conflict. The French and Indian War had major effects on the British and the American colonists. This war doubled England's already existing debt. The colonies' meager financial and military help outraged many British officials during the war, even though the war largely benefited the Americans. The British were also bitter about the colonists trading goods with enemies of Britain. Because of this, the British increased their authority over the colonies after the war. The British began to tax the colonists to meet England's financial needs. England passed many Acts that were ill conceived and had long-term effects on the relationship between England and the colonies. The crown had never directly taxed the colonists before, and this caused problems between the colonists and the British. A few of the major Acts were the Sugar Act, Currency Act, Stamp Act, and Tea Act. The Sugar Act of 1764 was an effort to stop the illegal trade between the colonists and the French and Spanish. The Currency Act was also passed in 1764. The colonists responded to the Sugar Act and Currency Act by protesting against the use of writs of assistance, or search warrants, which were filled out after the illegal goods were found, violating the colonists' rights. Alleged smugglers would be tried in the Admiralty Courts, where the accused had no right to trial by jury and the judge pocketed one third of the fines he imposed. The Stamp Act of 1765 enraged the colonists, for this act was a direct attempt by the English to raise money from the colonists without the consent of the colonial assemblies. This tax was different from the rest because the other taxes were meant to regulate trade. Colonists reacted with riots, boycotts, and the forming of the Stamp Act Congress, and Sam Adams organized the Sons of Liberty. The Stamp Act was the first internal, or direct, tax. The colonists felt that they were being taxed without representation.
In 1770 an extraordinary number of British troops were stationed in Boston. The Colonists didn't understand why there were so many troops after the war, and this added to the existing tension. The colonists taunted the Red Coats, and on March 5, 1770 they threw snowballs, resulting in a hasty decision by the Red Coats to fire on the colonists. Five colonists were killed and nine were wounded. This night is known as the Boston Massacre.
The Tea Act of 1773 was a tax on tea, but the British lowered the cost of tea significantly enough that even with the tax, British tea was cheaper than Dutch tea. Also, to keep the price down, the British East India Co. got rid of the middleman in the colonies and opened up their own shops. If the colonists bought this tea, they would be accepting the fact that the British could tax without representation. On December 16, 1773 the ships docked at the Boston ports. The Sons of Liberty dressed up as Indians and threw 342 chests of tea into the water. England responded to the Boston Tea Party with the Coercive Acts of 1774.
In the fall of 1774 the First Continental Congress met in Philadelphia. Fifty-five delegates made a list of grievances and sent it to the King, because they did not want to separate from the crown but simply work within the system. In the spring of 1775 they realized that working within the system was not going to work. For months common people had been training to be prepared to fight on a minute's notice; these were the minutemen. General Gage was instructed by the British to get rid of the minutemen. The minutemen were waiting at Lexington for the British soldiers because of the help of Paul Revere and William Dawes. No one knows who fired first, but eight minutemen were killed and ten were wounded in the "shots heard round the world." The British soldiers moved on to Concord, burnt the powder supply, and continued back to Boston, where along the way hidden common people continually fired at the Red Coats, with the result that the British lost almost three times as many men as the Americans. This was the beginning of the Revolution, which at this point was not a war but a rebellion.
Not all of the Colonists actually supported the rebellion. Roughly a third of the people were Loyalists, loyal to the crown; a third were neutral; and a third were Patriots. The Colonists didn't even have a unified army. The British Empire had money, an organized army, weapons and a great naval fleet; the Colonists had none of these. The only advantage the Colonists had at the beginning of the rebellion was that England was across the Atlantic and the battle was in the colonies. Compared to the British, one of the most powerful empires of the time, the Colonists did not seem to have a chance.
Some of the major turning points of the war were the involvement of the French and the Battle of Yorktown. The French did not enter the war until late; they got involved to spite the British after being defeated in the French and Indian War. The French brought the Colonists weapons, men, money and a naval fleet, and the Americans now seemed to have a chance. With the help of the French, Washington won the final battle at Yorktown. The French and American troops trapped Cornwallis' army of more than 7,000 men between land and sea. Cornwallis expected to find the British fleet but instead found the French fleet. After some resistance, Cornwallis surrendered.
The final settlement, in my opinion, was worth all the hardships. The Colonists could govern themselves and control their own affairs without input from England. England, for most of this period, was more concerned with having the colonies solve England's problems than with helping the colonies solve their own.
Historians' views of the Revolution fall into four groups: the Neo-Imperialists, the Anti-Progressives, the Neo-Whigs, and the New Left. The Neo-Imperialists believe that the British were at fault and that, had they changed some laws, things could have worked out. The Anti-Progressives see the social classes as coming together for the same causes: America was a middle-class society, but all the classes worked together for a common good. The Neo-Whigs feel that the conflict was between good ideas and bad ideas, and the good ideas always win. The New Left looks at how the Revolution affected minorities and is not interested in other aspects.
In my opinion, the true nature of the conflict between the British and the Colonists was that the British had loosely governed the colonies in the beginning. Because of problems at home in England, they did not strictly govern the colonies, and the colonies formed their own governments around the loose laws of the British. When the British needed money, they decided to bring in extra revenue by taxing the colonists. The colonists did not accept taxation without representation, which caused them to seek independence from the crown. Even though at the beginning of the Revolution the Colonists did not seem to have a chance, they came back in the end with the help of the French and dedication to their cause. The French entering the war was a major turning point. The final settlement turned out to be worth all of the Colonists' hardships, for they could finally govern themselves freely and make their own decisions. Historians throughout the years have had many different views about what the Revolution was really about, but half of them feel that the Colonists came together for a good cause. |
Race is a classification system used to categorize humans into large and distinct populations or groups by heritable phenotypic characteristics, geographic ancestry, physical appearance, ethnicity, and social status. In the early twentieth century the term was often used, in a taxonomic sense, to denote genetically differentiated human populations defined by phenotype. Law enforcement utilizes race in profiling suspects in some countries. These uses of racial categories are frequently criticized for perpetuating an outmoded understanding of human biological variation, and promoting stereotypes. Because in many societies racial groupings correspond closely with patterns of social stratification, for social scientists studying social inequality, race can be a significant variable. As sociological factors, racial categories may in part reflect subjective attributions, self-identities, and social institutions. Accordingly, the racial paradigms employed in different disciplines vary in their emphasis on biological reduction as contrasted with societal construction.
- 1 Complications and various definitions of the concept
- 2 Historical origins of racial classification
- 3 Modern debate
- 3.1 Models of human evolution
- 3.2 "Within" versus "between group variation"
- 3.3 Subspecies
- 3.4 Biological definitions of race
- 3.5 Social constructions
- 3.6 Current views across disciplines
- 3.7 Intelligence
- 4 Political and practical uses
- 5 See also
- 6 References
- 7 Bibliography
- 8 External links
Complications and various definitions of the concept
While biologists sometimes use the concept of race to make distinctions among fuzzy sets of traits, others in the scientific community suggest that the idea of race is often used in a naive or simplistic way. Among humans, race has no taxonomic significance; all living humans belong to the same hominid subspecies, Homo sapiens sapiens. Social conceptions and groupings of races vary over time, involving folk taxonomies that define essential types of individuals based on perceived traits. Scientists consider biological essentialism obsolete, and generally discourage racial explanations for collective differentiation in both physical and behavioral traits.
It has been demonstrated that race has no biological or genetic basis: the gross morphological features which have traditionally been used to define races (e.g. skin color) are determined by a small number of superficial genetic alleles with no link to characteristics such as intelligence, talent, or athletic ability. Race has been socially and legally constructed despite the lack of any scientific evidence for dividing humanity into racial categories with any generalized genetic meaning.
When people define and talk about a particular conception of race, they create a social reality through which social categorization is achieved. In this sense, races are said to be social constructs. These constructs develop within various legal, economic, and sociopolitical contexts, and may be the effect, rather than the cause, of major social situations. While race is understood to be a social construct by many, most scholars agree that race has real material effects in the lives of people through institutionalized practices of preference and discrimination.
Socioeconomic factors, in combination with early but enduring views of race, have led to considerable suffering within disadvantaged racial groups. Racial discrimination often coincides with racist mindsets, whereby the individuals and ideologies of one group come to perceive the members of an outgroup as both racially defined and morally inferior. As a result, racial groups possessing relatively little power often find themselves excluded or oppressed, while hegemonic individuals and institutions are charged with holding racist attitudes. Racism has led to many instances of tragedy, including slavery and genocide. Scholars continue to debate the degrees to which racial categories are biologically warranted and socially constructed, as well as the extent to which the realities of race must be acknowledged in order for society to comprehend and address racism adequately.
In the social sciences theoretical frameworks such as Racial formation theory and Critical race theory investigate implications of race as social construction by exploring how the images, ideas and assumptions of race are expressed in everyday life. A large body of scholarship has traced the relationships between the historical, social production of race in legal and criminal language and their effects on the policing and disproportionate incarceration of certain groups.
Since the second half of the twentieth century, the association of race with the ideologies and theories that grew out of the work of 19th-century anthropologists and physiologists has made the use of the word race itself problematic. Although still used in general contexts, it is now often replaced by less emotionally charged words such as populations, people(s), ethnic groups or communities, depending on context.
Historical origins of racial classification
Groups of humans have probably always identified themselves as distinct from other groups, but such differences have not always been understood to be natural, immutable and global. It is these features that distinguish how the concept of race is used today.
The word "race" was originally used to refer to any nation or ethnic group; Marco Polo in his 13th-century travels, for example, describes the Persian race. The current concept of "race" dates back only to the 17th century.
Race and colonialism
The European concept of "race", along with many of the ideas now associated with the term, arose at the time of the scientific revolution, which introduced and privileged the study of natural kinds, and the age of European imperialism and colonization which established political relations between Europeans and peoples with distinct cultural and political traditions. As Europeans encountered people from different parts of the world, they speculated about the physical, social, and cultural differences among various human groups. The rise of the Atlantic slave trade, which gradually displaced an earlier trade in slaves from throughout the world, created a further incentive to categorize human groups in order to justify the subordination of African slaves. Drawing on Classical sources and upon their own internal interactions — for example, the hostility between the English and Irish was a powerful influence on early European thinking about the differences between people — Europeans began to sort themselves and others into groups based on physical appearance, and to attribute to individuals belonging to these groups behaviors and capacities which were claimed to be deeply ingrained. A set of folk beliefs took hold that linked inherited physical differences between groups to inherited intellectual, behavioral, and moral qualities. Similar ideas can be found in other cultures, for example in China, where a concept often translated as "race" was associated with supposed common descent from the Yellow Emperor, and used to stress the unity of ethnic groups in China. Brutal conflicts between ethnic groups have existed throughout history and across the world.
Early taxonomic models
The first post-Classical published classification of humans into distinct races seems to be François Bernier's Nouvelle division de la terre par les différents espèces ou races qui l'habitent ("New division of Earth by the different species or races which inhabit it"), published in 1684. In the 18th century, the differences among human groups became a focus of scientific investigation. But the scientific classification of phenotypic variation was frequently coupled with racist ideas about innate predispositions of different groups, always attributing the most desirable features to the White, European race and arranging the other races along a continuum of progressively undesirable attributes. The 1735 classification of Carolus Linnaeus, inventor of zoological taxonomy, divided the human species Homo sapiens into the continental varieties Europaeus, Asiaticus, Americanus and Afer, each associated with a different humour: sanguine, melancholic, choleric and phlegmatic respectively. Homo sapiens Europaeus was described as active, acute, and adventurous, whereas Homo sapiens Afer was described as crafty, lazy and careless.
The 1775 treatise "The Natural Varieties of Mankind," by Johann Friedrich Blumenbach proposed five major divisions: the Caucasoid race, Mongoloid race, Ethiopian race (later termed the Negroid race), American Indian race, and Malayan race, but he did not propose any hierarchy among the races. Blumenbach also noted the graded transition in appearances from one group to adjacent groups and suggested that "one variety of mankind does so sensibly pass into the other, that you cannot mark out the limits between them".
From the 17th through the 19th centuries, the merging of folk beliefs about group differences with scientific explanations of those differences produced what one scholar has called an "ideology of race". According to this ideology, races are primordial, natural, enduring and distinct. It was further argued that some groups may be the result of mixture between formerly distinct populations, but that careful study could distinguish the ancestral races that had combined to produce admixed groups. Subsequent influential classifications by Georges Buffon, Petrus Camper and Christoph Meiners all classified "Negroes" as inferior to Europeans. In the United States the racial theories of Thomas Jefferson were influential. He saw Africans as inferior to Whites, especially in regard to their intellect, and imbued with unnatural sexual appetites, but described Native Americans as equals to whites.
Race and polygenism
In the last two decades of the 18th century polygenism, the belief that different races had evolved separately in each continent and shared no common ancestor, was advocated in England by historian Edward Long and anatomist Charles White, in Germany by ethnographers Christoph Meiners and Georg Forster, and in France by Julien-Joseph Virey, and prominently in the US by Samuel Morton, Josiah Nott and Louis Agassiz. Polygenism was most popular and widespread in the 19th century, culminating in the creation of the Anthropological Society of London during the American Civil War, in opposition to the abolitionist Ethnological Society.
Modern debate
Models of human evolution
In a 1995 article, Leonard Lieberman and Fatimah Jackson suggested that any new support for a biological concept of race will likely come from the study of human evolution. They therefore ask what, if any, implications current models of human evolution may have for any biological conception of race.
Today, all humans are classified as belonging to the species Homo sapiens and subspecies Homo sapiens sapiens. However, this is not the first species of the Homininae: the first species of the genus Homo, Homo habilis, is theorized to have evolved in East Africa at least 2 million years ago, and members of this species populated different parts of Africa in a relatively short time. Homo erectus is theorized to have evolved more than 1.8 million years ago, and by 1.5 million years ago had spread throughout Europe and Asia. Virtually all physical anthropologists agree that Homo sapiens evolved out of African Homo erectus (sensu lato) or Homo ergaster. Most anthropologists believe that Homo sapiens evolved in East Africa and then migrated out of Africa, replacing H. erectus populations throughout Europe and Asia (the Out of Africa model). Recent work in human evolutionary genetics (Jobling, Hurles and Tyler-Smith, 2004) supports this "Out of Africa" model; however, the recent sequencing of the Neanderthal and Denisovan genomes shows some admixture, suggesting interbreeding between early hominid species. These results also show that 40,000 years ago there co-existed at least three major subspecies that may be considered as "races" (or not; see discussion below): Denisovans, Neanderthals and Cro-Magnons. Today there is only one human species, with no subspecies.
"Within" versus "between" group variation
The F(ST), or ratio of genetic variation between groups to variation within groups, for human races is approximately 0.15. This has been argued to be ample to satisfy taxonomic significance; the F(ST) reported for chimpanzees is 0.18. The claim that the human F(ST) invalidates the race concept has been labeled "Lewontin's Fallacy". However, Witherspoon et al. (2007) concluded that Lewontin's "fallacy" is only a fallacy if one assumes that the populations to which individuals can be assigned are "races". They concluded that the ability to assign an individual to a specific population cluster, when enough markers are considered, is perfectly compatible with the fact that two randomly chosen individuals from different populations or clusters may still be more similar to each other than to a randomly chosen member of their own cluster, while still being traceable to specific regions.
Lieberman and Jackson argued that while advocates of both the Multiregional Model and the Out of Africa Model use the word race and make racial assumptions, none define the term. They conclude that students of human evolution would be better off avoiding the word race, and instead describe genetic differences in terms of populations and clinal gradations.
Subspecies
In the early 20th century, many anthropologists accepted and taught the belief that biologically distinct races were isomorphic with distinct linguistic, cultural, and social groups, while popularly applying that belief to the field of eugenics, in conjunction with a practice that is now called scientific racism.
Following the Nazi eugenics program, racial essentialism lost scientific credibility. Race anthropologists were pressured to acknowledge findings coming from studies of culture and population genetics, and to revise their conclusions about the sources of phenotypic variation. A significant number of modern anthropologists and biologists in the West came to view race as an invalid genetic or biological designation.
The first to challenge the concept of race on empirical grounds were anthropologists Franz Boas, who demonstrated phenotypic plasticity due to environmental factors, and Ashley Montagu who relied on evidence from genetics. E. O. Wilson then challenged the concept from the perspective of general animal systematics, and further rejected the claim that "races" were equivalent to "subspecies".
According to Jonathan Marks,
By the 1970s, it had become clear that (1) most human differences were cultural; (2) what was not cultural was principally polymorphic – that is to say, found in diverse groups of people at different frequencies; (3) what was not cultural or polymorphic was principally clinal – that is to say, gradually variable over geography; and (4) what was left – the component of human diversity that was not cultural, polymorphic, or clinal – was very small.
A consensus consequently developed among anthropologists and geneticists that race as the previous generation had known it – as largely discrete, geographically distinct gene pools – did not exist.
In biology the term "race" is used with caution because it can be ambiguous. Generally when it is used it is synonymous with subspecies. For mammals, the taxonomic unit below the species level is usually the subspecies.
Population geneticists have debated whether the concept of population can provide a basis for a new conception of race. In order to do this, a working definition of population must be found. Surprisingly, there is no generally accepted concept of population that biologists use. Although the concept of population is central to ecology, evolutionary biology and conservation biology, most definitions of population rely on qualitative descriptions such as "a group of organisms of the same species occupying a particular space at a particular time". Waples and Gaggiotti identify two broad types of definitions for populations: those that fall into an ecological paradigm and those that fall into an evolutionary paradigm. Examples of such definitions are:
- Ecological paradigm: A group of individuals of the same species that co-occur in space and time and have an opportunity to interact with each other.
- Evolutionary paradigm: A group of individuals of the same species living in close-enough proximity that any member of the group can potentially mate with any other member.
Morphologically differentiated populations
Traditionally, subspecies are seen as geographically isolated and genetically differentiated populations. That is, "the designation 'subspecies' is used to indicate an objective degree of microevolutionary divergence". One objection to this idea is that it does not specify what degree of differentiation is required. Therefore, any population that is somewhat biologically different could be considered a subspecies, even to the level of a local population. As a result, Templeton has argued that it is necessary to impose a threshold on the level of difference that is required for a population to be designated a subspecies.
This effectively means that populations of organisms must have reached a certain measurable level of difference to be recognised as subspecies. Dean Amadon proposed in 1949 that subspecies would be defined according to the seventy-five percent rule, which means that 75% of a population must lie outside 99% of the range of other populations for a given defining morphological character or set of characters. The seventy-five percent rule still has defenders, but other scholars argue that it should be replaced with a ninety or ninety-five percent rule.
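To make the arithmetic of the rule concrete, the following is a minimal sketch that applies one common one-sided reading of it to a single, normally distributed character; the trait means and standard deviations are invented purely for illustration and describe no real populations.

```python
# Toy check of one one-sided reading of Amadon's seventy-five percent rule:
# at least 75% of population A must lie beyond the value that bounds 99% of
# population B for the chosen character. All numbers here are hypothetical.
from statistics import NormalDist

def passes_75_rule(dist_a, dist_b):
    """Return (rule_satisfied, fraction of A beyond B's upper 99% bound)."""
    cutoff = dist_b.inv_cdf(0.99)              # value exceeded by only 1% of B
    frac_a_outside = 1.0 - dist_a.cdf(cutoff)  # share of A above that cutoff
    return frac_a_outside >= 0.75, frac_a_outside

pop_a = NormalDist(mu=63.0, sigma=4.0)  # hypothetical character in population A
pop_b = NormalDist(mu=50.0, sigma=4.0)  # hypothetical character in population B

ok, frac = passes_75_rule(pop_a, pop_b)
print(f"{frac:.1%} of A lies beyond 99% of B -> rule satisfied: {ok}")
```

With these invented numbers the criterion happens to be met; moving the two means closer together quickly drops the fraction below 75%, which is exactly the kind of sensitivity that motivates the ninety or ninety-five percent proposals.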
In 1978, Sewall Wright suggested that human populations that have long inhabited separated parts of the world should, in general, be considered different subspecies by the usual criterion that most individuals of such populations can be allocated correctly by inspection. Wright argued that it does not require a trained anthropologist to classify an array of Englishmen, West Africans, and Chinese with 100% accuracy by features, skin color, and type of hair despite so much variability within each of these groups that every individual can easily be distinguished from every other. However, it is customary to use the term race rather than subspecies for the major subdivisions of the human species as well as for minor ones.
On the other hand, in practice subspecies are often defined by easily observable physical appearance, but there is not necessarily any evolutionary significance to these observed differences, so this form of classification has become less acceptable to evolutionary biologists. Likewise this typological approach to race is generally regarded as discredited by biologists and anthropologists.
Because of the difficulty in classifying subspecies morphologically, many biologists have found the concept problematic, citing issues such as:
- Visible physical differences do not always correlate with one another, leading to the possibility of different classifications for the same individual organisms.
- Parallel evolution can lead to the appearance of similarities between groups of organisms that are not part of the same species.
- Isolated populations within previously designated subspecies have been found to exist.
- The criteria for classification may be arbitrary if they ignore gradual variation in traits.
Sesardic argues that when several traits are analyzed at the same time, forensic anthropologists can classify a person's race with an accuracy of close to 100% based on only skeletal remains. This is discussed in a later section.
Ancestrally differentiated populations
Cladistics is another method of classification. A clade is a taxonomic group of organisms consisting of a single common ancestor and all the descendants of that ancestor. Every creature produced by sexual reproduction has two immediate lineages, one maternal and one paternal. Whereas Carolus Linnaeus established a taxonomy of living organisms based on anatomical similarities and differences, cladistics seeks to establish a taxonomy—the phylogenetic tree—based on genetic similarities and differences and tracing the process of acquisition of multiple characteristics by single organisms. Some researchers have tried to clarify the idea of race by equating it to the biological idea of the clade. Often mitochondrial DNA or Y chromosome sequences are used to study ancient human migration paths. These single-locus sources of DNA do not recombine and are inherited from a single parent. Individuals from the various continental groups tend to be more similar to one another than to people from other continents, and tracing either mitochondrial DNA or non-recombinant Y-chromosome DNA explains how people in one place may be largely derived from people in some remote location.
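As a rough illustration of the general idea of building a tree from genetic similarities and differences rather than from anatomical comparison, the sketch below runs average-linkage (UPGMA-style) clustering on an invented four-sample distance matrix; real cladistic analyses rely on shared derived characters and far richer data, so this is only a cartoon of the procedure.

```python
# Minimal sketch: building a tree from pairwise genetic distances.
# The 4x4 distance matrix is invented purely for illustration; it is not
# real mitochondrial or Y-chromosome data.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# Hypothetical pairwise distances among four samples (symmetric, zero diagonal).
dist = np.array([
    [0.00, 0.02, 0.10, 0.11],
    [0.02, 0.00, 0.09, 0.10],
    [0.10, 0.09, 0.00, 0.03],
    [0.11, 0.10, 0.03, 0.00],
])

# Average-linkage clustering on the condensed form of the matrix.
tree = linkage(squareform(dist), method="average")

# Each row records one merge: the two clusters joined and the distance at which
# they join; original samples are numbered 0-3, new clusters 4, 5, ...
for step, (left, right, height, _) in enumerate(tree):
    print(f"step {step}: join {int(left)} and {int(right)} at distance {height:.3f}")
```

Run on these made-up numbers, samples 0 and 1 pair first, then 2 and 3, and the two pairs join last, which is the tree structure the distances were constructed to produce.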
Often taxonomists prefer to use phylogenetic analysis to determine whether a population can be considered a subspecies. Phylogenetic analysis relies on the concept of derived characteristics that are not shared between groups, usually applying to populations that are allopatric (geographically separated) and therefore discretely bounded. This would make a subspecies, evolutionarily speaking, a clade – a group with a common evolutionary ancestor population. The smooth gradation of human genetic variation in general rules out any idea that human population groups can be considered monophyletic (cleanly divided), as there appears always to have been considerable gene flow between human populations. Rachel Caspari (2003) has argued that clades are by definition monophyletic groups (a taxon that includes all descendants of a given ancestor) and since no groups currently regarded as races are monophyletic, none of those groups can be clades.
For anthropologists Lieberman and Jackson (1995), however, there are more profound methodological and conceptual problems with using cladistics to support concepts of race. They claim that "the molecular and biochemical proponents of this model explicitly use racial categories in their initial grouping of samples". For example, the large and highly diverse macroethnic groups of East Indians, North Africans, and Europeans are presumptively grouped as Caucasians prior to the analysis of their DNA variation. This is claimed to limit and skew interpretations, obscure other lineage relationships, deemphasize the impact of more immediate clinal environmental factors on genomic diversity, and can cloud our understanding of the true patterns of affinity. They argue that however significant the empirical research, these studies use the term race in conceptually imprecise and careless ways. They suggest that the authors of these studies find support for racial distinctions only because they began by assuming the validity of race. "For empirical reasons we prefer to place emphasis on clinal variation, which recognizes the existence of adaptive human hereditary variation and simultaneously stresses that such variation is not found in packages that can be labeled races."
These scientists do not dispute the importance of cladistic research, only its retention of the word race, when reference to populations and clinal gradations are more than adequate to describe the results.
Clines
One crucial innovation in reconceptualizing genotypic and phenotypic variation was anthropologist C. Loring Brace's observation that such variation, insofar as it is affected by natural selection, slow migration, or genetic drift, is distributed along geographic gradations or clines. In part this is due to isolation by distance. This point called attention to a problem common to phenotype-based descriptions of races (for example, those based on hair texture and skin color): they ignore a host of other similarities and differences (for example, blood type) that do not correlate highly with the markers for race. Hence anthropologist Frank Livingstone's conclusion that, since clines cross racial boundaries, "there are no races, only clines".
In a response to Livingstone, Theodosius Dobzhansky argued that when talking about race one must be attentive to how the term is being used: "I agree with Dr. Livingstone that if races have to be 'discrete units,' then there are no races, and if 'race' is used as an 'explanation' of the human variability, rather than vice versa, then the explanation is invalid." He further argued that one could use the term race if one distinguished between "race differences" and "the race concept." The former refers to any distinction in gene frequencies between populations; the latter is "a matter of judgment." He further observed that even when there is clinal variation, "Race differences are objectively ascertainable biological phenomena… but it does not follow that racially distinct populations must be given racial (or subspecific) labels." In short, Livingstone and Dobzhansky agree that there are genetic differences among human beings; they also agree that the use of the race concept to classify people, and how the race concept is used, is a matter of social convention. They differ on whether the race concept remains a meaningful and useful social convention.
In 1964, biologists Paul Ehrlich and Holm pointed out cases where two or more clines are distributed discordantly—for example, melanin is distributed in a decreasing pattern from the equator north and south; frequencies for the haplotype for beta-S hemoglobin, on the other hand, radiate out of specific geographical points in Africa. As anthropologists Leonard Lieberman and Fatimah Linda Jackson observed, "Discordant patterns of heterogeneity falsify any description of a population as if it were genotypically or even phenotypically homogeneous".
Patterns such as those seen in human physical and genetic variation as described above have led to the consequence that the number and geographic location of any described races is highly dependent on the importance attributed to, and quantity of, the traits considered. Scientists discovered a skin-lightening mutation that partially accounts for the appearance of light skin in humans (people who migrated out of Africa northward into what is now Europe), which they estimate occurred 20,000 to 50,000 years ago. East Asians owe their relatively light skin to different mutations. On the other hand, the greater the number of traits (or alleles) considered, the more subdivisions of humanity are detected, since traits and gene frequencies do not always correspond to the same geographical location. Or as Ossorio & Duster (2005) put it:
Anthropologists long ago discovered that humans' physical traits vary gradually, with groups that are close geographic neighbors being more similar than groups that are geographically separated. This pattern of variation, known as clinal variation, is also observed for many alleles that vary from one human group to another. Another observation is that traits or alleles that vary from one group to another do not vary at the same rate. This pattern is referred to as nonconcordant variation. Because the variation of physical traits is clinal and nonconcordant, anthropologists of the late 19th and early 20th centuries discovered that the more traits and the more human groups they measured, the fewer discrete differences they observed among races and the more categories they had to create to classify human beings. The number of races observed expanded to the 30s and 50s, and eventually anthropologists concluded that there were no discrete races. Twentieth and 21st century biomedical researchers have discovered this same feature when evaluating human variation at the level of alleles and allele frequencies. Nature has not created four or five distinct, nonoverlapping genetic groups of people.
More recent genetic studies indicate that skin color may change radically over as few as 100 generations, or about 2,500 years, given the influence of the environment.
Serre & Pääbo (2004) argued for smooth, clinal genetic variation in ancestral populations even in regions previously considered racially homogeneous, with the apparent gaps turning out to be artifacts of sampling techniques. Rosenberg et al. (2005) disputed this and argued that using more data showed that there were small discontinuities in the smooth genetic variation for ancestral populations at the location of geographic barriers such as the Sahara, the oceans, and the Himalayas.
Genetically differentiated populations
Another way to look at differences between populations is to measure genetic differences rather than physical differences between groups. Mid-20th century anthropologist William C. Boyd defined race as: "A population which differs significantly from other populations in regard to the frequency of one or more of the genes it possesses. It is an arbitrary matter which, and how many, gene loci we choose to consider as a significant 'constellation'". Leonard Lieberman and Rodney Kirk have pointed out that "the paramount weakness of this statement is that if one gene can distinguish races then the number of races is as numerous as the number of human couples reproducing." Moreover, anthropologist Stephen Molnar has suggested that the discordance of clines inevitably results in a multiplication of races that renders the concept itself useless. The Human Genome Project states "People who have lived in the same geographic region for many generations may have some alleles in common, but no allele will be found in all members of one population and in no members of any other."
Fixation index
Population geneticist Sewall Wright developed one way of measuring genetic differences between populations, known as the fixation index, often abbreviated to FST. This statistic is often used in taxonomy to compare differences between any two given populations by measuring the genetic differences among and between populations for individual genes, or for many genes simultaneously. It is often stated that the fixation index for humans is about 0.15, meaning that an estimated 85% of the variation measured in the overall human population is found within members of the same population, while about 15% of the variation occurs between populations. These estimates imply that any two individuals from different populations are often nearly as similar to each other as two individuals from the same population. Richard Lewontin, who affirmed these ratios, concluded that neither "race" nor "subspecies" was an appropriate or useful way to describe human populations. Others, also noting that group variation was relatively low compared to the variation observed in other mammalian species, agreed that the evidence confirmed the absence of natural subdivision of the human population.
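As a minimal sketch of how such a figure is obtained, the function below computes one textbook form of Wright's statistic, F_ST = (H_T - H_S) / H_T, for a single biallelic locus; the allele frequencies are invented for illustration and carry no empirical weight.

```python
# Minimal sketch of Wright's fixation index for one biallelic locus.
# The allele frequencies below are hypothetical, not real population data.

def fst(freqs, weights=None):
    """F_ST = (H_T - H_S) / H_T for one biallelic locus.

    freqs:   frequency of one allele in each population.
    weights: optional relative population sizes (defaults to equal).
    """
    if weights is None:
        weights = [1.0] * len(freqs)
    total = float(sum(weights))
    w = [x / total for x in weights]

    # H_S: expected heterozygosity within each population, averaged.
    h_s = sum(wi * 2 * p * (1 - p) for wi, p in zip(w, freqs))

    # H_T: expected heterozygosity if all populations were pooled.
    p_bar = sum(wi * p for wi, p in zip(w, freqs))
    h_t = 2 * p_bar * (1 - p_bar)

    return (h_t - h_s) / h_t if h_t > 0 else 0.0

# Hypothetical frequencies of one allele in three populations.
print(f"F_ST = {fst([0.20, 0.35, 0.50]):.3f}")  # small value: most variation lies within groups
```

Averaging such single-locus values, or using multi-locus estimators, over many loci is roughly how population-level figures like the 0.15 quoted above are produced.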
Wright himself believed that values above 0.25 represent very great genetic variation and that an FST of 0.15–0.25 represents great variation. It should, however, be noted that about 5% of human variation occurs between populations within continents; therefore, FST values between continental groups of humans (or races) as low as 0.1 (or possibly lower) have been found in some studies, suggesting more moderate levels of genetic variation. Graves (1996) has countered that FST should not be used as a marker of subspecies status, since the statistic measures the degree of differentiation between populations; but see also Wright (1978).
In an ongoing debate, some geneticists argue that race is neither a meaningful concept nor a useful heuristic device, and even that genetic differences among groups are biologically meaningless, because more genetic variation exists within such races than among them, and that racial traits overlap without discrete boundaries.
Jeffrey Long and Rick Kittles give a long critique of the application of FST to human populations in their 2003 paper "Human Genetic Diversity and the Nonexistence of Biological Races". They find that the figure of 85% is misleading because it implies that all human populations contain on average 85% of all genetic diversity. They claim that this does not correctly reflect human population history, because it treats all human groups as independent. A more realistic portrayal of the way human groups are related is to understand that some human groups are parental to other groups and that these groups represent paraphyletic groups to their descent groups. For example, under the recent African origin theory the human population in Africa is paraphyletic to all other human groups because it represents the ancestral group from which all non-African populations derive, but more than that, non-African groups only derive from a small non-representative sample of this African population. This means that all non-African groups are more closely related to each other and to some African groups (probably east Africans) than they are to others, and further that the migration out of Africa represented a genetic bottleneck, with much of the diversity that existed in Africa not being carried out of Africa by the emigrating groups. This view produces a version of human population movements that do not result in all human populations being independent; but rather, produces a series of dilutions of diversity the further from Africa any population lives, each founding event representing a genetic subset of its parental population. Long and Kittles find that rather than 85% of human genetic diversity existing in all human populations, about 100% of human diversity exists in a single African population, whereas only about 70% of human genetic diversity exists in a population derived from New Guinea. Long and Kittles argued that this still produces a global human population that is genetically homogeneous compared to other mammalian populations.
Cluster analysis
In his 2003 paper, "Human Genetic Diversity: Lewontin's Fallacy", A. W. F. Edwards argued that rather than using a locus-by-locus analysis of variation to derive taxonomy, it is possible to construct a human classification system based on characteristic genetic patterns, or clusters inferred from multilocus genetic data. Geographically based human studies since have shown that such genetic clusters can be derived from the analysis of a large number of loci, which can assort sampled individuals into groups analogous to traditional continental racial groups. Joanna Mountain and Neil Risch cautioned that while genetic clusters may one day be shown to correspond to phenotypic variations between groups, such assumptions were premature as the relationship between genes and complex traits remains poorly understood. However, Risch denied such limitations render the analysis useless: "Perhaps just using someone's actual birth year is not a very good way of measuring age. Does that mean we should throw it out? ... Any category you come up with is going to be imperfect, but that doesn't preclude you from using it or the fact that it has utility."
Early human genetic cluster analysis studies were conducted with samples taken from ancestral population groups living at extreme geographic distances from each other. It was thought that such large geographic distances would maximize the genetic variation between the groups sampled in the analysis and thus maximize the probability of finding cluster patterns unique to each group. In light of the historically recent acceleration of human migration (and correspondingly, human gene flow) on a global scale, further studies were conducted to judge the degree to which genetic cluster analysis can pattern ancestrally identified groups as well as geographically separated groups. One such study looked at a large multiethnic population in the United States, and "detected only modest genetic differentiation between different current geographic locales within each race/ethnicity group. Thus, ancient geographic ancestry, which is highly correlated with self-identified race/ethnicity—as opposed to current residence—is the major determinant of genetic structure in the U.S. population."(Tang et al. (2005))
Witherspoon et al. (2007) have argued that even when individuals can be reliably assigned to specific population groups, it may still be possible for two randomly chosen individuals from different populations/clusters to be more similar to each other than to a randomly chosen member of their own cluster. They found that many thousands of genetic markers had to be used in order for the answer to the question "How often is a pair of individuals from one population genetically more dissimilar than two individuals chosen from two different populations?" to be "never". This assumed three population groups separated by large geographic ranges (European, African and East Asian). The entire world population is much more complex and studying an increasing number of groups would require an increasing number of markers for the same answer. The authors conclude that "caution should be used when using geographic or genetic ancestry to make inferences about individual phenotypes." Witherspoon et al. concluded that, "The fact that, given enough genetic data, individuals can be correctly assigned to their populations of origin is compatible with the observation that most human genetic variation is found within populations, not between them. It is also compatible with our finding that, even when the most distinct populations are considered and hundreds of loci are used, individuals are frequently more similar to members of other populations than to members of their own population."
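The dependence on the number of markers can be illustrated with a toy simulation along these lines; the two simulated populations, their sample sizes, and their allele frequencies (0.45 versus 0.55 at every locus) are all invented for this sketch and stand in for no real data.

```python
# Illustration only: a toy version of the pattern described by Witherspoon et al.
# Two hypothetical populations differ modestly in allele frequency at every locus.
# With few loci, a pair drawn from different populations is often closer than a
# pair drawn from the same population; with many loci this almost never happens,
# even though most variation remains within the groups.
import numpy as np

rng = np.random.default_rng(0)

def misordered_fraction(geno_a, geno_b, n_pairs=2000):
    """Fraction of sampled comparisons where a between-group pair is closer
    (smaller genotype distance) than a within-group pair."""
    n = geno_a.shape[0]
    count = 0
    for _ in range(n_pairs):
        i, j, k = rng.integers(0, n, size=3)
        within = np.abs(geno_a[i] - geno_a[j]).sum()    # distance inside group A
        between = np.abs(geno_a[i] - geno_b[k]).sum()   # distance across groups
        count += between < within
    return count / n_pairs

for n_loci in (10, 100, 1000):
    # Genotypes coded as 0/1/2 copies of one allele at each biallelic locus.
    geno_a = rng.binomial(2, 0.45, size=(200, n_loci))  # hypothetical population A
    geno_b = rng.binomial(2, 0.55, size=(200, n_loci))  # hypothetical population B
    frac = misordered_fraction(geno_a, geno_b)
    print(f"{n_loci:5d} loci: between-group pair closer {frac:.1%} of the time")
```

With only a handful of loci the between-group pair is frequently the closer one; as the number of loci grows, that fraction falls toward zero, mirroring the qualitative pattern described in the passage above.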
Anthropologists such as C. Loring Brace, philosopher Jonathan Kaplan, and geneticist Joseph Graves have argued that while it is certainly possible to find biological and genetic variation that corresponds roughly to the groupings normally defined as "continental races", this is true for almost all geographically distinct populations. The cluster structure of the genetic data is therefore dependent on the initial hypotheses of the researcher and the populations sampled. When one samples continental groups the clusters become continental; if one had chosen other sampling patterns, the clustering would be different. Weiss and Fullerton have noted that if one sampled only Icelanders, Mayans and Maoris, three distinct clusters would form and all other populations could be described as being clinally composed of admixtures of Maori, Icelandic and Mayan genetic materials. Kaplan therefore argues that seen in this way both Lewontin and Edwards are right in their arguments. He concludes that while racial groups are characterized by different allele frequencies, this does not mean that racial classification is a natural taxonomy of the human species, because multiple other genetic patterns can be found in human populations that crosscut racial distinctions. In this view racial groupings are social constructions that also have a biological reality which is largely an artefact of how the category has been constructed.
Biological definitions of race
| Concept | Source | Definition |
|---|---|---|
| Essentialist | Hooton (1926) | "A great division of mankind, characterized as a group by the sharing of a certain combination of features, which have been derived from their common descent, and constitute a vague physical background, usually more or less obscured by individual variations, and realized best in a composite picture." |
| Taxonomic | Mayr (1969) | "A subspecies is an aggregate of phenotypically similar populations of a species, inhabiting a geographic subdivision of the range of a species, and differing taxonomically from other populations of the species." |
| Population | Dobzhansky (1970) | "Races are genetically distinct Mendelian populations. They are neither individuals nor particular genotypes, they consist of individuals who differ genetically among themselves." |
| Lineage | Templeton (1998) | "A subspecies (race) is a distinct evolutionary lineage within a species. This definition requires that a subspecies be genetically differentiated due to barriers to genetic exchange that have persisted for long periods of time; that is, the subspecies must have historical continuity in addition to current genetic differentiation." |
Social constructions
As anthropologists and other evolutionary scientists have shifted away from the language of race to the term population to talk about genetic differences, historians, cultural anthropologists and other social scientists re-conceptualized the term "race" as a cultural category or social construct—a particular way that some people talk about themselves and others.
Many social scientists have replaced the word race with the word "ethnicity" to refer to self-identifying groups based on beliefs concerning shared culture, ancestry and history. Alongside empirical and conceptual problems with "race," following the Second World War, evolutionary and social scientists were acutely aware of how beliefs about race had been used to justify discrimination, apartheid, slavery, and genocide. This questioning gained momentum in the 1960s during the U.S. civil rights movement and the emergence of numerous anti-colonial movements worldwide. They thus came to believe that race itself is a social construct, a concept that had been believed to correspond to an objective reality but which was believed in because of its social functions.
Craig Venter and Francis Collins of the National Institutes of Health jointly announced the mapping of the human genome in 2000. Upon examining the data from the genome mapping, Venter realized that although the genetic variation within the human species is on the order of 1–3% (instead of the previously assumed 1%), the types of variations do not support the notion of genetically defined races. Venter said, "Race is a social concept. It's not a scientific one. There are no bright lines (that would stand out), if we could compare all the sequenced genomes of everyone on the planet." "When we try to apply science to try to sort out these social differences, it all falls apart."
Stephan Palmié asserted that race "is not a thing but a social relation"; or, in the words of Katya Gibel Mevorach, "a metonym," "a human invention whose criteria for differentiation are neither universal nor fixed but have always been used to manage difference." As such, the use of the term "race" itself must be analyzed. Moreover, they argue that biology will not explain why or how people use the idea of race: History and social relationships will.
Imani Perry, a professor in the Center for African American Studies at Princeton University, has made significant contributions to how we define race in America today. Perry's work focuses on how race is experienced. Perry tells us that race "is produced by social arrangements and political decision making," and explains further that "race is something that happens, rather than something that is. It is dynamic, but it holds no objective truth."
The theory that race is merely a social construct has been challenged by the findings of researchers at the Stanford University School of Medicine, published in the American Journal of Human Genetics as "Genetic Structure, Self-Identified Race/Ethnicity, and Confounding in Case-Control Association Studies". One of the researchers, Neil Risch, noted: "we looked at the correlation between genetic structure [based on microsatellite markers] versus self-description, we found 99.9% concordance between the two. We actually had a higher discordance rate between self-reported sex and markers on the X chromosome! So you could argue that sex is also a problematic category. And there are differences between sex and gender; self-identification may not be correlated with biology perfectly. And there is sexism."
Race and Ethnicity
The distinction between race and ethnicity is considered highly problematic. Ethnicity is often assumed to be the cultural identity of a group from a nation state, while race is assumed to be the biological and/or cultural essentialization of a group hierarchy of superiority and inferiority related to biological constitution. It is assumed that, based on power relations, there exist 'racialized ethnicities' and 'ethnicized races'. Ramón Grosfoguel (University of California, Berkeley) notes that 'racial/ethnic identity' is one concept and that concepts of race and ethnicity cannot be used as separate and autonomous categories.
Brazil
Compared to 19th century United States, 20th century Brazil was characterized by a perceived relative absence of sharply defined racial groups. According to anthropologist Marvin Harris, this pattern reflects a different history and different social relations. Basically, race in Brazil was "biologized," but in a way that recognized the difference between ancestry (which determines genotype) and phenotypic differences. There, racial identity was not governed by rigid descent rule, such as the one-drop rule, as it was in the United States. A Brazilian child was never automatically identified with the racial type of one or both parents, nor were there only a very limited number of categories to choose from.
Over a dozen racial categories would be recognized in conformity with all the possible combinations of hair color, hair texture, eye color, and skin color. These types grade into each other like the colors of the spectrum, and no one category stands significantly isolated from the rest. That is, race referred preferentially to appearance, not heredity. The complexity of racial classifications in Brazil reflects the extent of miscegenation in Brazilian society, a society that remains highly, but not strictly, stratified along color lines. Hence the Brazilian narrative of a perfectly "post-racist" country must be met with caution, as sociologist Gilberto Freyre demonstrated in 1933 in Casa Grande e Senzala.
European Union
The European Union uses the terms racial origin and ethnic origin synonymously in its documents; according to the relevant Council Directive, "the use of the term 'racial origin' in this directive does not imply an acceptance of such [racial] theories". Haney López warns that using 'race' as a category within the law tends to legitimize its existence in the popular imagination. In the diverse geographic context of Europe, ethnicity and ethnic origin are arguably more resonant and less encumbered by the ideological baggage associated with 'race'. In the European context, the historical resonance of 'race' underscores its problematic nature. In some states, it is strongly associated with laws promulgated by the Nazi and Fascist governments in Europe during the 1930s and 1940s. Indeed, in 1996 the European Parliament adopted a resolution stating that "the term should therefore be avoided in all official texts".
The concept of racial origin is inherently problematic, being grounded in the scientifically false notion that human beings can be separated into biologically distinct 'races'. Since all human beings belong to the same species, the ECRI (European Commission against Racism and Intolerance) rejects theories based on the existence of different 'races'. However, in its Recommendation the ECRI uses this term in order to ensure that those persons who are generally and erroneously perceived as belonging to 'another race' are not excluded from the protection provided for by the legislation. The law claims to reject the existence of 'race', yet penalizes situations where someone is treated less favourably on this ground.
France
Since the end of the Second World War, France has become an ethnically diverse country. Today, approximately five percent of the French population is non-European and non-white. This does not approach the number of non-white citizens in the United States (roughly 15-25%, depending on how Latinos are classified). Nevertheless, it amounts to at least three million people, and has forced the issues of ethnic diversity onto the French policy agenda. France has developed an approach to dealing with ethnic problems that stands in contrast to that of many advanced, industrialized countries. Unlike the United States, Britain, or even the Netherlands, France maintains a "color-blind" model of public policy. This means that it targets virtually no policies directly at racial or ethnic groups. Instead, it uses geographic or class criteria to address issues of social inequalities. It has, however, developed an extensive anti-racist policy repertoire since the early 1970s. Until recently, French policies focused primarily on issues of hate speech, going much further than their American counterparts, and relatively less on issues of discrimination in jobs, housing, and the provision of goods and services.
United States
The immigrants to the Americas came from every region of Europe, Africa, and Asia. They mixed among themselves and with the indigenous inhabitants of the continent. In the United States most people who self-identify as African–American have some European ancestors, while many people who identify as European American have some African or Amerindian ancestors.
Since the early history of the United States, Amerindians, African–Americans, and European Americans have been classified as belonging to different races. Efforts to track mixing between groups led to a proliferation of categories, such as mulatto and octoroon. The criteria for membership in these races diverged in the late 19th century. During Reconstruction, increasing numbers of Americans began to consider anyone with "one drop" of known "Black blood" to be Black, regardless of appearance. By the early 20th century, this notion was made statutory in many states. Amerindians continue to be defined by a certain percentage of "Indian blood" (called blood quantum). To be White one had to have perceived "pure" White ancestry. The one-drop rule or hypodescent rule refers to the convention of defining a person as racially black if he or she has any known African ancestry. This rule meant that those who were of mixed race but had some discernible African ancestry were defined as black. The one-drop rule is specific not only to those with African ancestry but also to the United States, making it a particularly African-American experience.
The term "Hispanic" as an ethnonym emerged in the 20th century with the rise of migration of laborers from American Spanish-speaking countries to the United States. Today, the word "Latino" is often used as a synonym for "Hispanic". The definitions of both terms are non-race specific, and include people who consider themselves to be of distinct races (Black, White, Amerindian, Asian, and mixed groups). However, there is a common misconception in the US that Hispanic/Latino is a race or sometimes even that national origins such as Mexican, Cuban, Colombian, Salvadoran, etc. are races. In contrast to "Latino" or "Hispanic", "Anglo" refers to non-Hispanic White Americans or non-Hispanic European Americans, most of whom speak the English language but are not necessarily of English descent.
Current views across disciplines
In Poland, the race concept was rejected by 25 percent of anthropologists in 2001, although: "Unlike the U.S. anthropologists, Polish anthropologists tend to regard race as a term without taxonomic value, often as a substitute for population."
Lieberman et al. in a 2004 study claimed to "present the currently available information on the status of the concept in the United States, the Spanish language areas, Poland, Europe, Russia, and China. Rejection of race ranges from high to low with the highest rejection occurring among anthropologists in the United States (and Canada). Rejection of race is moderate in Europe, sizeable in Poland and Cuba, and lowest in Russia and China." Methods used in the studies reported included questionnaires and content analysis.
Kaszycka et al. (2009) in 2002–2003 surveyed European anthropologists' opinions toward the biological race concept. Three factors, country of academic education, discipline, and age, were found to be significant in differentiating the replies. Those educated in Western Europe, physical anthropologists, and middle-aged persons rejected race more frequently than those educated in Eastern Europe, people in other branches of science, and those from both younger and older generations. "The survey shows that the views of anthropologists on race are sociopolitically (ideologically) influenced and highly dependent on education."
United States views
One result of debates over the meaning and validity of the concept of race is that the current literature across different disciplines regarding human variation lacks consensus, though within some fields, such as some branches of anthropology, there is strong consensus. Some studies use the word race in its early essentialist taxonomic sense. Many others still use the term race, but use it to mean a population, clade, or haplogroup. Others eschew the concept of race altogether, and use the concept of population as a less problematic unit of analysis.
U.S. anthropology
The concept of biological race has declined significantly in frequency of use in physical anthropology in the United States during the 20th century. A majority of physical anthropologists in the United States have rejected the concept of biological races. Since 1932, an increasing number of college textbooks introducing physical anthropology have rejected race as a valid concept: from 1932 to 1976, only seven out of thirty-two rejected race; from 1975 to 1984, thirteen out of thirty-three rejected race; from 1985 to 1993, thirteen out of nineteen rejected race. According to one academic journal entry, whereas 78 percent of the articles in the 1931 Journal of Physical Anthropology employed the term race or nearly synonymous terms reflecting a bio-race paradigm, only 36 percent did so in 1965, and just 28 percent did in 1996.
The "Statement on 'Race'" (1998) composed by a select committee of anthropologists and issued by the executive board of the American Anthropological Association as a statement they "believe [...] represents generally the contemporary thinking and scholarly positions of a majority of anthropologists", declares:
"In the United States both scholars and the general public have been conditioned to viewing human races as natural and separate divisions within the human species based on visible physical differences. With the vast expansion of scientific knowledge in this century, however, it has become clear that human populations are not unambiguous, clearly demarcated, biologically distinct groups. Evidence from the analysis of genetics (e.g., DNA) indicates that most physical variation, about 94%, lies within so-called racial groups. Conventional geographic "racial" groupings differ from one another only in about 6% of their genes. This means that there is greater variation within "racial" groups than between them. In neighboring populations there is much overlapping of genes and their phenotypic (physical) expressions. Throughout history whenever different groups have come into contact, they have interbred. The continued sharing of genetic materials has maintained all of humankind as a single species."
"With the vast expansion of scientific knowledge in this century, ... it has become clear that human populations are not unambiguous, clearly demarcated, biologically distinct groups. [...] Given what we know about the capacity of normal humans to achieve and function within any culture, we conclude that present-day inequalities between so-called "racial" groups are not consequences of their biological inheritance but products of historical and contemporary social, economic, educational, and political circumstances."
A survey taken in 1985 (Lieberman et al. 1992) asked 1,200 American scientists whether they disagreed with the following proposition: "There are biological races in the species Homo sapiens." The responses for anthropologists were:
The figure for physical anthropologists at PhD granting departments was slightly higher, rising from 41% to 42%, with 50% agreeing. This survey, however, did not specify any particular definition of race (although it did clearly specify biological race within the species Homo sapiens); it is difficult to say whether those who supported the statement thought of race in taxonomic or population terms.
The same survey, taken in 1999, showed the following changing results for anthropologists:
A line of research conducted by Cartmill (1998), however, seemed to limit the scope of Lieberman’s finding that there was "a significant degree of change in the status of the race concept". Goran Štrkalj has argued that this may be because Lieberman and collaborators had looked at all the members of the American Anthropological Association irrespective of their field of research interest, while Cartmill had looked specifically at biological anthropologists interested in human variation.
According to the 2000 edition of a popular physical anthropology textbook, forensic anthropologists are overwhelmingly in support of the idea of the basic biological reality of human races. Forensic physical anthropologist and professor George W. Gill has said that the idea that race is only skin deep "is simply not true, as any experienced forensic anthropologist will affirm" and "Many morphological features tend to follow geographic boundaries coinciding often with climatic zones. This is not surprising since the selective forces of climate are probably the primary forces of nature that have shaped human races with regard not only to skin color and hair form but also the underlying bony structures of the nose, cheekbones, etc. (For example, more prominent noses humidify air better.)" While he can see good arguments for both sides, the complete denial of the opposing evidence "seems to stem largely from socio-political motivation and not science at all". He also states that many biological anthropologists see races as real yet "not one introductory textbook of physical anthropology even presents that perspective as a possibility. In a case as flagrant as this, we are not dealing with science but rather with blatant, politically motivated censorship".
In partial response to Gill's statement, Professor of Biological Anthropology C. Loring Brace argues that the ability of laymen and biological anthropologists to determine the geographic ancestry of an individual is explained by the fact that biological characteristics are clinally distributed across the planet, and that this does not translate into the concept of race. He states that "Well, you may ask, why can't we call those regional patterns "races"? In fact, we can and do, but it does not make them coherent biological entities. "Races" defined in such a way are products of our perceptions. ... We realize that in the extremes of our transit—Moscow to Nairobi, perhaps—there is a major but gradual change in skin color from what we euphemistically call white to black, and that this is related to the latitudinal difference in the intensity of the ultraviolet component of sunlight. What we do not see, however, is the myriad other traits that are distributed in a fashion quite unrelated to the intensity of ultraviolet radiation. Where skin color is concerned, all the northern populations of the Old World are lighter than the long-term inhabitants near the equator. Although Europeans and Chinese are obviously different, in skin color they are closer to each other than either is to equatorial Africans. But if we test the distribution of the widely known ABO blood-group system, then Europeans and Africans are closer to each other than either is to Chinese." "Race" is still sometimes used within forensic anthropology (when analyzing skeletal remains), biomedical research, and race-based medicine. Brace has criticized this practice of forensic anthropologists, who use the controversial concept of "race" out of convention when they should in fact be talking about regional ancestry. He argues that while a forensic anthropologist can determine that skeletal remains come from a person with ancestors in a specific region of Africa, categorizing those remains as "black" applies a socially constructed category that is only meaningful in the particular context of the United States, and that is not itself scientifically valid.
Other fields
In the 1985 poll (Lieberman et al. 1992) the results for biologists and developmental psychologists were:
In February 2001, the editors of Archives of Pediatrics and Adolescent Medicine asked "authors to not use race and ethnicity when there is no biological, scientific, or sociological reason for doing so." The editors also stated that "analysis by race and ethnicity has become an analytical knee-jerk reflex." Nature Genetics now asks authors to "explain why they make use of particular ethnic groups or populations, and how classification was achieved."
Lieberman et al. (1992) examined 77 college textbooks in biology and 69 in physical anthropology published between 1932 and 1989. Until the 1970s, physical anthropology texts argued that biological races exist; after that, they began to argue that races do not exist. In contrast, biology textbooks never underwent such a reversal but instead dropped their discussion of race altogether. Morning (2008) looked at high school biology textbooks over the 1952-2002 period and found a broadly similar pattern: the share of textbooks directly discussing race fell from an initial 92% to only 35% in the 1983–92 period, before rising somewhat to 43% thereafter. More indirect and brief discussions of race in the context of medical disorders increased from none to 93% of textbooks. In general, the material on race has moved from surface traits to genetics and evolutionary history. The study argues that the textbooks' fundamental message about the existence of races has changed little.
Gissis (2008) examined several important American and British journals in genetics, epidemiology and medicine for their content during the 1946-2003 period. He wrote that "Based upon my findings I argue that the category of race only seemingly disappeared from scientific discourse after World War II and has had a fluctuating yet continuous use during the time span from 1946 to 2003, and has even become more pronounced from the early 1970s on".
A 1994 examination of 32 English sport/exercise science textbooks found that 7 (21.9%) claimed that there are biophysical differences due to race that might explain differences in sports performance, 24 (75%) neither mentioned nor refuted the concept, and 1 (3.12%) expressed caution about the idea.
Thirty-three health services researchers from differing geographic regions were interviewed in a 2008 study. The researchers recognized the problems with racial and ethnic variables, but the majority still believed these variables were necessary and useful.
A 2010 examination of 18 widely used English anatomy textbooks found that every one relied on the race concept. The study gives examples of how the textbooks claim that anatomical features vary between races.
Intelligence
Researchers have reported differences in the average IQ test scores of various ethnic groups. The interpretation, causes, accuracy, and reliability of these differences are highly controversial. Some psychologists, such as Arthur Jensen and Richard Lynn, have argued that such differences are at least partially genetic. Richard Herrnstein and Charles Murray argue that "intelligence is less than completely heritable." Many other researchers in psychology, sociology, and anthropology, for example Thomas Sowell, David F. Marks, Jonathan Marks, and Richard Nisbett, argue that the differences are largely due to social and economic inequalities. Still others, such as Stephen Jay Gould and Robert Sternberg, have argued that categories such as "race" and "intelligence" are both "folk" constructs rather than well defined scientific concepts, and that since the definitions are largely fluid and susceptible to different cultural constructions, attempts to explain variation in one in terms of the other are scientifically invalid.
Political and practical uses
Biomedicine
In the United States, policy makers use racially categorized data to identify and address health disparities between racial or ethnic groups. In clinical settings, race has long been considered in the diagnosis and treatment of medical conditions, because some medical conditions are more prevalent in certain racial or ethnic groups than in others. Recent interest in race-based medicine, or race-targeted pharmacogenomics, has been fueled by the proliferation of human genetic data which followed the decoding of the human genome in the early 2000s. There is an active debate among biomedical researchers about the meaning and importance of race in their research. Some researchers strongly support the continued use of racial categorizations in biomedical research and clinical practice. They argue that race may correlate, albeit imperfectly, with the presence of specific genetic variants associated with disease: Insofar as race "provides a sufficiently precise proxy for human genetic variation", the concept may be medically viable. In addition, knowledge of a person's race may provide a cost-effective way to assess susceptibility to genetically influenced medical conditions.
Detractors of race-based medicine acknowledge that race is sometimes useful in clinical medicine, but encourage minimizing its use. They suggest that medical practices should maintain their focus on the individual rather than an individual's membership in any group. They argue that overemphasizing genetic contributions to health disparities carries various risks, such as reinforcing stereotypes, promoting racism, or ignoring the contribution of non-genetic factors to health disparities. Some researchers in the field have been accused "of using race as a placeholder during the 'meantime' of pharmacogenomic development". Conversely, it is argued that in the early stages of the field's development, researchers must consider race-related factors if they are to ascertain the clinical potentials of ongoing scholarship.
Law enforcement
In an attempt to provide general descriptions that may facilitate the job of law enforcement officers seeking to apprehend suspects, the United States FBI employs the term "race" to summarize the general appearance (skin color, hair texture, eye shape, and other such easily noticed characteristics) of individuals whom they are attempting to apprehend. From the perspective of law enforcement officers, it is generally more important to arrive at a description that will readily suggest the general appearance of an individual than to make a scientifically valid categorization by DNA or other such means. Thus, in addition to assigning a wanted individual to a racial category, such a description will include: height, weight, eye color, scars and other distinguishing characteristics.
British police use a classification based on the ethnic background of British society: W1 (White-British), W2 (White-Irish), W9 (Any other white background); M1 (White and black Caribbean), M2 (White and black African), M3 (White and Asian), M9 (Any other mixed background); A1 (Asian-Indian), A2 (Asian-Pakistani), A3 (Asian-Bangladeshi), A9 (Any other Asian background); B1 (Black Caribbean), B2 (Black African), B3 (Any other black background); O1 (Chinese), O9 (Any other). Some of the characteristics that constitute these groupings are biological and some are learned (cultural, linguistic, etc.) traits that are easy to notice.
In many countries, such as France, the state is legally banned from maintaining data based on race, which often leads the police to issue wanted notices to the public that include labels like "dark skin complexion".
In the United States, the practice of racial profiling has been ruled to be both unconstitutional and a violation of civil rights. There is active debate regarding the cause of a marked correlation between recorded crimes, punishments meted out, and the country's populations. Many consider de facto racial profiling an example of institutional racism in law enforcement. The history of racial categories being misused to adversely impact some groups and to offer protection and advantage to others has a clear effect on the debate over the government's legitimate use of known phenotypic or genotypic characteristics tied to the presumed race of both victims and perpetrators.
Mass incarceration in the United States disproportionately impacts African American and Latino communities. Michelle Alexander, author of The New Jim Crow: Mass Incarceration in the Age of Colorblindness (2010), argues that mass incarceration is best understood as not only a system of overcrowded prisons. Mass incarceration is also, "the larger web of laws, rules, policies, and customs that control those labeled criminals both in and out of prison." She defines it further as "a system that locks people not only behind actual bars in actual prisons, but also behind virtual bars and virtual walls," illustrating the second-class citizenship that is imposed on a disproportionate number of people of color, specifically African-Americans. She compares mass incarceration to Jim Crow laws, stating that both work as racial caste systems.
Recent work using DNA cluster analysis to infer racial background has been used by some criminal investigators to narrow their search for the identity of both suspects and victims. Proponents of DNA profiling in criminal investigations cite cases where leads based on DNA analysis proved useful, but the practice remains controversial among medical ethicists, defense lawyers, and some in law enforcement.
Forensic anthropology
Similarly, forensic anthropologists draw on highly heritable morphological features of human remains (e.g. cranial measurements) to aid in the identification of the body, including in terms of race. In a 1992 article, anthropologist Norman Sauer noted that anthropologists had generally abandoned the concept of race as a valid representation of human biological diversity, except for forensic anthropologists. This led him to ask, "if races don't exist, why are forensic anthropologists so good at identifying them?" He concluded that "the successful assignment of race to a skeletal specimen is not a vindication of the race concept, but rather a prediction that an individual, while alive was assigned to a particular socially constructed 'racial' category. A specimen may display features that point to African ancestry. In this country that person is likely to have been labeled Black regardless of whether or not such a race actually exists in nature." C. Loring Brace echoed this answer, stating: "The simple answer is that, as members of the society that poses the question, they are inculcated into the social conventions that determine the expected answer. They should also be aware of the biological inaccuracies contained in that "politically correct" answer. Skeletal analysis provides no direct assessment of skin color, but it does allow an accurate estimate of original geographical origins. African, eastern Asian, and European ancestry can be specified with a high degree of accuracy. Africa of course entails "black," but "black" does not entail African."
Commercial determination of ancestry
New research in molecular genetics, and the marketing of genetic identities through the analysis of one's Y chromosome, mtDNA, or autosomal DNA to the general public in the form of "Personalized Genetic Histories" (PGH) has caused debate.
Typically, a consumer of a commercial PGH service sends in a DNA sample, which is analyzed by molecular biologists, and receives a report. Shriver and Kittles remarked:
For many customers of lineage-based tests, there is a lack of understanding that their maternal and paternal lineages do not necessarily represent their entire genetic make-up. For example, an individual might have more than 85% Western European 'genomic' ancestry but still have a West African mtDNA or NRY lineage.
Nevertheless, they acknowledge, such stories are increasingly appealing to the general public.
Through these reports, advances in molecular genetics are used to create or confirm stories people have about their social identities. Abu el-Haj argued that genetic lineages, like older notions of race, suggest some idea of biological relatedness, but unlike older notions of race they are not directly connected to claims about human behaviour or character. She said that "postgenomics does seem to be giving race a new lease on life."
Race science was never just about classification. It presupposed a distinctive relationship between "nature" and "culture," understanding the differences in the former to ground and to generate the different kinds of persons ("natural kinds") and the distinctive stages of cultures and civilizations that inhabit the world.
Abu el-Haj argues that genomics and the mapping of lineages and clusters liberates "the new racial science from the older one by disentangling ancestry from culture and capacity." As an example, she refers to recent work by Hammer et al., which aimed to test the claim that present-day Jews are more closely related to one another than to neighbouring non-Jewish populations. Hammer et al. found that the degree of genetic similarity among Jews shifted depending on the locus investigated, and suggested that this was the result of natural selection acting on particular loci. They therefore focused on the non-recombining Y chromosome to "circumvent some of the complications associated with selection".
As another example she points to work by Thomas et al., who sought to distinguish between the Y chromosomes of Jewish priests (Kohanim; in Judaism, membership in the priesthood is passed on through the father's line) and the Y chromosomes of non-Jews. Abu el-Haj concluded that this new "race science" calls attention to the importance of "ancestry" (narrowly defined, as it does not include all ancestors) in some religions and in popular culture, and to people's desire to use science to confirm their claims about ancestry; this "race science," she argues, is fundamentally different from older notions of race that were used to explain differences in human behaviour or social status:
As neutral markers, junk DNA cannot generate cultural, behavioural, or, for that matter, truly biological differences between groups ... mtDNA and Y-chromosome markers relied on in such work are not "traits" or "qualities" in the old racial sense. They do not render some populations more prone to violence, more likely to suffer psychiatric disorders, or for that matter, incapable of being fully integrated – because of their lower evolutionary development – into a European cultural world. Instead, they are "marks," signs of religious beliefs and practices… it is via biological noncoding genetic evidence that one can demonstrate that history itself is shared, that historical traditions are (or might well be) true.
Stephan Palmié has responded to Abu el-Haj's claim that genetic lineages make possible a new, politically, economically, and socially benign notion of race and racial difference by suggesting that efforts to link genetic history and personal identity will inevitably "ground present social arrangements in a time-hallowed past," that is, use biology to explain cultural differences and social inequalities.
One problem with these assignments is admixture. Many people have a varied ancestry. For example, in the United States, most people who self-identify as African American have some European ancestors. In a survey of college students who self-identified as "white" in a northeastern U.S. university, ~30% were estimated to have <90% European ancestry.
On the other hand, there are tests that do not rely on molecular lineages, but rather on correlations between allele frequencies; when allele frequencies correlate across many loci, the resulting groupings are often called clusters. These sorts of tests use informative alleles called ancestry-informative markers (AIMs), which, although shared across all human populations, vary a great deal in frequency between groups of people living in geographically distant parts of the world.
These tests use contemporary people sampled from certain parts of the world as references to determine the likely proportion of ancestry for any given individual. In a recent Public Broadcasting Service (PBS) programme on the subject of genetic ancestry testing, the academic Henry Louis Gates "wasn't thrilled with the results (it turns out that 50 percent of his ancestors are likely European)". Charles Rotimi, of Howard University's National Human Genome Center, argued in 2003 that "the nature or appearance of genetic clustering (grouping) of people is a function of how populations are sampled, of how criteria for boundaries between clusters are set, and of the level of resolution used", all of which bias the results, and concluded that people should be very cautious about relating genetic lineages or clusters to their own sense of identity.
On the other hand, Rosenberg (2005) argued that if enough genetic markers and subjects are analyzed, then the clusters found are consistent. How many genetic markers a commercial service uses likely varies, although new technology has continually allowed increasing numbers to be analyzed.
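To make the idea of an AIM-based test concrete, the following is a minimal, illustrative sketch, not any commercial service's actual algorithm: given reference allele frequencies for a few populations, it scores an individual's genotypes under each reference population and normalizes the scores. The population labels, marker frequencies, and genotypes are invented for illustration.

```python
# Illustrative sketch of scoring an individual's genotypes against reference
# populations using ancestry-informative markers (AIMs). All numbers are made up.
import math

# Hypothetical frequency of the "A" allele at five AIM loci in three
# hypothetical reference populations.
REFERENCE_FREQS = {
    "pop_1": [0.90, 0.15, 0.80, 0.70, 0.10],
    "pop_2": [0.30, 0.85, 0.20, 0.40, 0.75],
    "pop_3": [0.55, 0.50, 0.45, 0.60, 0.50],
}

def genotype_log_likelihood(genotype, freqs):
    """Log-likelihood of a genotype (count of 'A' alleles per locus: 0, 1, or 2)
    under Hardy-Weinberg proportions for the given allele frequencies."""
    total = 0.0
    for count, p in zip(genotype, freqs):
        q = 1.0 - p
        if count == 2:
            prob = p * p
        elif count == 1:
            prob = 2.0 * p * q
        else:
            prob = q * q
        total += math.log(prob)
    return total

def relative_support(genotype):
    """Normalize per-population likelihoods so they sum to one. These are
    relative supports for single-population origin, not true admixture
    proportions (which would require a mixture model over many markers)."""
    scores = {pop: genotype_log_likelihood(genotype, f)
              for pop, f in REFERENCE_FREQS.items()}
    max_score = max(scores.values())
    weights = {pop: math.exp(s - max_score) for pop, s in scores.items()}
    norm = sum(weights.values())
    return {pop: w / norm for pop, w in weights.items()}

if __name__ == "__main__":
    individual = [2, 0, 2, 1, 0]   # allele counts at the five loci
    for pop, support in sorted(relative_support(individual).items()):
        print(f"{pop}: {support:.2f}")
```

The sketch also illustrates the two cautions in the surrounding text: adding more markers sharpens the assignment (Rosenberg's point about consistency), while the result is entirely driven by which reference populations are sampled and how boundaries are drawn (Rotimi's point about bias).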
See also
References
- ^ See: *Lie 2004 *Thompson & Hickey 2005 *Gordon 1964 *AAA 1998 *Palmié 2007 *Mevorach 2007 *Segal 1991 *Bindon 2005
- ^ King 2007: For example, "the association of blacks with poverty and welfare ... is due, not to race per se, but to the link that race has with poverty and its associated disadvantages"–p.75.
- ^ Schaefer 2008: "In many parts of Latin America, racial groupings are based less on the biological physical features and more on an intersection between physical features and social features such as economic class, dress, education, and context. Thus, a more fluid treatment allows for the construction of race as an achieved status rather than an ascribed status as is the case in the United States"
- ^ Graves 2001
- ^ a b Lee et al. 2008: "We caution against making the naive leap to a genetic explanation for group differences in complex traits, especially for human behavioral traits such as IQ scores"
- ^ a b c d Keita et al. 2004
- ^ AAPA 1996: "Pure races, in the sense of genetically homogeneous populations, do not exist in the human species today, nor is there any evidence that they have ever existed in the past." p. 714
- ^ See:
- ^ Sober 2000
- ^ AAA 1998: For example, "Evidence from the analysis of genetics (e.g., DNA) indicates that most physical variation, about 94%, lies within so-called racial groups. Conventional geographic 'racial' groupings differ from one another only in about 6% of their genes. This means that there is greater variation within 'racial' groups than between them."
- ^ Steven A. Ramirez What We Teach When We Teach About Race: The Problem of Law and Pseudo-Economics 54 Journal of Legal Education 365 (2004)
- ^ American Anthropological Association's Statement on "Race" May 17 1998
- ^ American Association of Physical Anthropological, Statement on Biological Aspects of Race101 American Journal Physical Anthropology 569 1996
- ^ Steve Olson, Mapping Human History: Discovering the Past Through Our Genes, Boston, 2002
- ^ Lee 1997
- ^ See: *Blank, Dabady & Citro 2004 *Smaje 1997
- ^ See: *Lee 1997 *Nobles 2000 *Morgan 1975 as cited in Lee 1997, p. 407
- ^ See: *Morgan 1975 as cited in Lee 1997, p. 407 *Smedley 2007 *Sivanandan 2000 *Crenshaw 1988 *Conley 2007 *Winfield 2007: "It was Aristotle who first arranged all animals into a single, graded scale that placed humans at the top as the most perfect iteration. By the late 19th century, the idea that inequality was the basis of natural order, known as the great chain of being, was part of the common lexicon."
- ^ Lee 1997 citing Morgan 1975 and Appiah 1992
- ^ See: *Sivanandan 2000 *Muffoletto 2003 *McNeilly et al. 1996: psychiatric instrument called the "Perceived Racism Scale" "provides a measure of the frequency of exposure to many manifestations of racism ... including individual and institutional"; also assesses motional and behavioral coping responses to racism." *Miles 2000
- ^ Owens & King 1999
- ^ See: *Brace 2000 *Gill 2000 *Lee 1997: "The very naturalness of 'reality' is itself the effect of a particular set of discursive constructions. In this way, discourse does not simply reflect reality, but actually participates in its construction"
- ^ "race". Oxford Dictionaries. April 2010. Oxford University Press. http://oxforddictionaries.com/definition/english/race--2 (accessed July 31, 2012).
- ^ a b Marks 2008, p. 28
- ^ Marco Polo, in the 13th century, writes of the North Persians: "The people are of the Mahometan religion. They are in general a handsome race, especially the women, who, in my opinion, are the most beautiful in the world."; Polo 2007, p. 41
- ^ Smedley 2007
- ^ a b Smedley 1999
- ^ Meltzer 1993
- ^ Takaki 1993
- ^ Banton 1977
- ^ For examples see: Lewis 1990; Dikötter 1992
- ^ a b c d Race, Ethnicity, and Genetics Working Group (October 2005). "The use of racial, ethnic, and ancestral categories in human genetics research". American Journal of Human Genetics 77 (4): 519–32. DOI:10.1086/491747. PMID 16175499.
- ^ Todorov 1993
- ^ Brace 2005, p. 27
- ^ Slotkin (1965), p. 177.
- ^ a b c Graves 2001, p. 39
- ^ a b Marks 1995
- ^ Graves 2001, pp. 42–43
- ^ Stocking 1968, pp. 38–40
- ^ Desmond & Moore 2009, pp. 332–341
- ^ a b c d e Lieberman & Jackson 1995
- ^ Camilo J. Cela-Conde and Francisco J. Ayala. 2007. Human Evolution Trails from the Past Oxford University Press p. 195
- ^ Lewin, Roger. 2005. Human Evolution an illustrated introduction. Fifth edition. p. 159. Blackwell
- ^ Reich D, Patterson N, Kircher M, et al. (October 2011). "Denisova admixture and the first modern human dispersals into Southeast Asia and Oceania". Am. J. Hum. Genet. 89 (4): 516–28. DOI:10.1016/j.ajhg.2011.09.005. PMID 21944045.
- ^ Human genetic diversity and the nonexistence of biological races, 2009
- ^ Human genetic diversity: Lewontin's fallacy, Edwards, 2003
- ^ (2007) "Genetic Similarities Within and Between Human Populations". Genetics 176 (1): 351–9. DOI:10.1534/genetics.106.067355. PMID 17339205.
- ^ Witherspoon DJ, Wooding S, Rogers AR, et al. (May 2007). "Genetic Similarities Within and Between Human Populations". Genetics 176 (1): 351–9. DOI:10.1534/genetics.106.067355. PMID 17339205.
- ^ Currell & Cogdell 2006
- ^ Cravens 2010
- ^ See: *Cravens 2010 *Angier 2000 *Amundson 2005 *Reardon 2005
- ^ See: *Smedley 2002 *Boas 1912
- ^ See: *Marks 2002 *Montagu 1941 *Montagu 1942
- ^ Wilson & Brown 1953
- ^ See: *Keita et al. 2004 *Templeton 1998 *Long & Kittles 2003
- ^ Haig et al. 2006
- ^ a b Waples & Gaggiotti 2006
- ^ a b c d e Templeton 1998
- ^ See: *Amadon 1949 *Mayr 1969 *Patten & Unitt 2002
- ^ a b Wright 1978
- ^ See: *Keita et al. 2004 *Templeton 1998
- ^ (2006) "Understanding Race and Human Variation: A Public Education Program". Anthropology News 47 (2): 7. DOI:10.1525/an.2006.47.2.7.
- ^ Brace 1964
- ^ a b Livingstone & Dobzhansky 1962
- ^ Ehrlich & Holm 1964
- ^ Weiss 2005
- ^ Marks 2002
- ^ Krulwich 2009
- ^ Boyd 1950
- ^ Lieberman & Kirk 1997, p. 195
- ^ Molnar 1992
- ^ Human Genome Project 2003
- ^ a b c Graves 2006
- ^ Lewontin 1972
- ^ Keita et al. 2004; Bamshad et al. 2004; Tishkoff & Kidd 2004; Jorde & Wooding 2004
- ^ Wilson et al. 2001, Cooper, Kaufman & Ward 2003 (given in summary by Bamshad et al. 2004, p. 599)
- ^ (Schwartz 2001), (Stephens 2003) (given in summary by Bamshad et al. 2004, p. 599)
- ^ Smedley & Smedley 2005, (Helms et al. 2005), . Lewontin, for example argues that there is no biological basis for race on the basis of research indicating that more genetic variation exists within such races than among them (Lewontin 1972).
- ^ Long & Kittles 2003
- ^ Edwards 2003
- ^ See: *Cavalli-Sforza, Menozzi & Piazza 1994 *Bamshad et al. 2004, p. 599 *Tang et al. 2004 *Rosenberg et al. 2005: "If enough markers are used... individuals can be partitioned into genetic clusters that match major geographic subdivisions of the globe."
- ^ Mountain & Risch 2004
- ^ Gitschier 2005
- ^ Witherspoon et al. 2007
- ^ Witherspoon DJ, Wooding S, Rogers AR, et al. (May 2007). "Genetic Similarities Within and Between Human Populations". Genetics 176 (1): 358. DOI:10.1534/genetics.106.067355. PMID 17339205.
- ^ Loring Brace, C. 2005. Race is a four letter word. Oxford University Press.
- ^ Kaplan, Jonathan Michael (January 2011) ‘Race’: What Biology Can Tell Us about a Social Construct. In: Encyclopedia of Life Sciences (ELS). John Wiley & Sons, Ltd: Chichester
- ^ Graves, Joseph. 2001. The Emperor's New Clothes. Rutgers University Press
- ^ Weiss KM and Fullerton SM (2005) Racing around, getting nowhere. Evolutionary Anthropology 14: 165–169
- ^ Gordon 1964
- ^ "New Ideas, New Fuels: Craig Venter at the Oxonian". FORA.tv. 2008-11-03. http://fora.tv/2008/07/30/New_Ideas_New_Fuels_Craig_Venter_at_the_Oxonian#chapter_17. Retrieved 2009-04-18.
- ^ (May 2007) "Genomics, divination, 'racecraft'". American Ethnologist 34: 205–22. DOI:10.1525/ae.2007.34.2.205.
- ^ (2007) "Race, racism, and academic complicity". American Ethnologist 34: 238. DOI:10.1525/ae.2007.34.2.238.
- ^ Imani Perry, More Beautiful and More Terrible: The Embrace and Transcendence of Racial Inequality in the United States (New York, NY: New York University Press, 2011), 23.
- ^ Imani Perry, More Beautiful and More Terrible: The Embrace and Transcendence of Racial Inequality in the United States (New York, NY: New York University Press, 2011), 24.
- ^ Tang H, Quertermous T, Rodriguez B, et al. (February 2005). "Genetic Structure, Self-Identified Race/Ethnicity, and Confounding in Case-Control Association Studies". American Journal of Human Genetics 76 (2): 268–75. DOI:10.1086/427888. PMID 15625622.
- ^ Risch N (July 2005). "The whole side of it--an interview with Neil Risch by Jane Gitschier". PLoS Genetics 1 (1): e14. DOI:10.1371/journal.pgen.0010014. PMID 17411332.
- ^ Grosfoguel, Ramón (September 2004). "Race and Ethnicity or Racialized Ethnicities? Identities within Global Coloniality". Ethnicities 4 (3). DOI:10.1177/1468796804045237. Retrieved on 2012-08-06.
- ^ Harris 1980
- ^ The Council of the European Union Council Directive 2000/43/EC of 29 June 2000 implementing the principle of equal treatment between persons irrespective of racial or ethnic origin
- ^ European Union Directives on the Prohibition of Discrimination Icelandic Human Rights Centre
- ^ Mark Bell Racism and Equality in the European Union Oxford University Press, publication date: 2009, Print ISBN-13: 9780199297849, DOI:10.1093/acprof:oso/9780199297849.001.0001
- ^ Race Policy in France by Erik Bleich, Middlebury College, 2012-05-01
- ^ Sexton, Jared (2008). Amalgamation Schemes. Univ of Minnesota Press.
- ^ Nobles 2000
- ^ "Revisions to the Standards for the Classification of Federal Data on Race and Ethnicity". Office of Management and Budget. 1997-10-30. http://www.whitehouse.gov/omb/fedreg/1997standards.html. Retrieved 2009-03-19. Also: U.S. Census Bureau Guidance on the Presentation and Comparison of Race and Hispanic Origin Data and B03002. HISPANIC OR LATINO ORIGIN BY RACE; 2007 American Community Survey 1-Year Estimates
- ^ (2003) "'Race' Still an Issue for Physical Anthropology? Results of Polish Studies Seen in the Light of the U.S. Findings". American Anthropologist 105: 116–24. DOI:10.1525/aa.2003.105.1.116.
- ^ The race concept in six regions: variation without consensus, Lieberman L, Kaszycka KA, Martinez Fuentes AJ, Yablonsky L, Kirk RC, Strkalj G, Wang Q, Sun L., Coll Antropol. 2004 Dec;28(2):907-21, http://www.ncbi.nlm.nih.gov/pubmed/15666627
- ^ Current Views of European Anthropologists on Race: Influence of Educational and Ideological Background, Katarzyna A. Kaszycka, Goran Štrkalj, Jan Strzalko, American Anthropologist Volume 111, Issue 1, pages 43–56, March 2009, doi:10.1111/j.1548-1433.2009.01076.x
- ^ The decline of race in American physical anthropology Leonard Lieberman, Rodney C. Kirk, Michael Corcoran. 2003. Department of Sociology and Anthropology, Central Michigan University, Mt. Pleasant, MI. 48859, USA
- ^ (2003) "Perishing Paradigm: Race1931-99". American Anthropologist 105: 110. DOI:10.1525/aa.2003.105.1.110.
A following article in the same issue questions the precise rate of decline, but from their opposing perspective agrees that the Negroid/Caucasoid/Mongoloid paradigm has fallen into near-total disfavor: (2003) "Surveying the Race Concept: A Reply to Lieberman, Kirk, and Littlefield". American Anthropologist 105: 114. DOI:10.1525/aa.2003.105.1.114.
- ^ "American Anthropological Association Statement on "Race"". Aaanet.org. 1998-05-17. http://www.aaanet.org/stmts/racepp.htm. Retrieved 2009-04-18.
- ^ Bindon, Jim. University of Alabama. "Post World War II". 2005. August 28, 2006.
- ^ (February 2001) "How "Caucasoids" got such big crania and why they shrank. From Morton to Rushton." (PDF). Current anthropology 42 (1): 69–95. DOI:10.1086/318434. PMID 14992214.
- ^ (2007) "The Status of the Race Concept in Contemporary Biological Anthropology: A Review" (PDF). Anthropologist.
- ^ a b Does race exist? A proponent’s perspective. Gill GW. (2000) PBS. http://www.pbs.org/wgbh/nova/first/gill.html
- ^ http://www.pbs.org/wgbh/nova/first/brace.html
- ^ See: *Gill 2000 *Armelagos & Smay 2000 *Risch et al. 2002 *Bloche 2004
- ^ C. Loring Brace, 1995. "Region Does not Mean "Race"--Reality Versus Convention in Forensic Anthropology," Journal of Forensic Sciences 40 (#2): 29-33.
- ^ Frederick P. Rivara and Laurence Finberg, "Use of the Terms Race and Ethnicity," Archives of Pediatrics & Adolescent Medicine 155, no. 2 (2001): 119. "In future issues of the ARCHIVES, we ask authors to not use race and ethnicity when there is no biological, scientific, or sociological reason for doing so. Race or ethnicity should not be used as explanatory variables, when the underlying constructs are variables that can, and should, be measured directly (eg, educational level of subjects, household income of the families, single vs 2-parent households, employment of parents, owning vs renting one's home, and other measures of socioeconomic status). In contrast, the recent attention on decreasing health disparities uses race and ethnicity not as explanatory variables but as ways of examining the underlying sociocultural reasons for these disparities and appropriately targeting attention and resources on children and adolescents with poorer health. In select issues and questions such as these, use of race and ethnicity is appropriate."
- ^ See program announcement and requests for grant applications at the NIH website, at nih.gov.
- ^ Robert S. Schwartz, "Racial Profiling in Medical Research," The New England Journal of Medicine, 344 (no, 18, May 3, 2001)
- ^ Lieberman, Leonard, Raymond E. Hampton, Alice Littlefield, and Glen Hallead. 1992. "Race in Biology and Anthropology: A Study of College Texts and Professors." Journal of Research in Science Teaching 29 (3): 301–21.
- ^ Reconstructing Race in Science and Society:Biology Textbooks, 1952–2002, Ann Morning, American Journal of Sociology. 2008;114 Suppl:S106-37.
- ^ PMID 19026975 (PubMed)
- ^ The presentation of human biological diversity in sport and exercise science textbooks: the example of "race.", Christopher J. Hallinan, Journal of Sport Behavior, March 1994
- ^ The conceptualization and operationalization of race and ethnicity by health services researchers, Susan Moscou, Nursing Inquiry, Volume 15, Issue 2, pages 94–105, June 2008
- ^ Human Biological Variation in Anatomy Textbooks: The Role of Ancestry, Goran Štrkalj and Veli Solyali, Studies on Ethno-Medicine, 4(3): 157-161 (2010)
- ^ Herrnstein & Murray 1996, pp. 413–414
- ^ Gould, S. J. (1981). The Mismeasure of Man. New York: W.W. Norton & Co. passim
- ^ Sternberg, Grigorenko, Kidd (2005). "Intelligence, race, and genetics". American Psychologist 60.
- ^ Office of Minority Health
- ^ a b c Risch et al. 2002
- ^ a b Condit et al. 2003
- ^ Lee et al. 2008
- ^ (2009) "Beyond BiDil: the Expanding Embrace of Race in Biomedical Research and Product Development" (PDF). St. Louis University Journal of Health Law & Policy 3: 61–92. Retrieved on 30 December 2010. ; In 2005, the Food and Drug Administration licensed a drug, BiDil, targeted specifically for the treatment of heart disease in African Americans. The recommendation of the drug for "blacks" is criticized because clinical trials were limited only to self-identified African Americans. It has been conceded by the trial investigators that there is no basis to claim the drug works differently in any other population. However, being approved and marketed to African Americans only, that specificity alone has been used in turn to claim genetic differences.
- ^ In summary, Condit et al. (2003) argues that, in order to predict the clinical success of pharmacogenomic research, scholars must conduct subsidiary research on two fronts: Science, wherein the degree of correspondence between popular and professional racial categories can be assessed; and society at large, through which attitudinal factors moderate the relationship between scientific soundness and societal acceptance. To accept race-as-proxy, then, may be necessary but insufficient to solidify the future of race-based pharmacogenomics.
- ^ Michelle Alexander, The New Jim Crow: Mass Incarceration in the Age of Colorblindness (New York, NY: The New Press, 2010), 13.
- ^ Michelle Alexander, The New Jim Crow: Mass Incarceration in the Age of Colorblindness (New York, NY: The New Press, 2010), 12.
- ^ Abraham 2009
- ^ Willing 2005
- ^ a b Sauer 1992
- ^ Brace CL. 1995. J Forensic Sci. Mar;40(2):171-5. Region does not mean "race"--reality versus convention in forensic anthropology.
- ^ a b Shriver & Kittles 2004
- ^ Thomas MG, Skorecki K, Ben-Ami H, Parfitt T, Bradman N, Goldstein DB (July 1998). "Origins of Old Testament priests". Nature 394 (6689): 138–40. DOI:10.1038/28083. PMID 9671297.
- ^ (2007) "Rethinking genetic genealogy: A response to Stephan Palmié". American Ethnologist 34: 223. DOI:10.1525/ae.2007.34.2.223.
- ^ (2007) "Rejoinder: Genomic moonlighting, Jewish cyborgs, and Peircian abduction". American Ethnologist 34: 245. DOI:10.1525/ae.2007.34.2.245.
- ^ Frank, Reanne. "Back with a Vengeance: the Reemergence of a Biological Conceptualization of Race in Research on Race/Ethnic Disparities in Health". Retrieved on 2009-04-18.
- ^ Rotimi CN (December 2003). "Genetic ancestry tracing and the African identity: a double-edged sword?". Developing World Bioethics 3 (2): 151–8. DOI:10.1046/j.1471-8731.2003.00071.x. PMID 14768647.
- ^ Rosenberg NA, Mahajan S, Ramachandran S, Zhao C, Pritchard JK, Feldman MW (December 2005). "Clines, clusters, and the effect of study design on the inference of human population structure". PLoS Genetics 1 (6): e70. DOI:10.1371/journal.pgen.0010070. PMID 16355252.
Bibliography
- Abraham, Carolyn. "Molecular eyewitness: DNA gets a human face". The Globe and Mail, 2009-04-07 (Phillip Crawley). http://m.theglobeandmail.com/life/molecular-eyewitness-dna-gets-a-human-face/article888804/?service=mobile&template=shareEmail&tabInside_tab=0&page=1. Retrieved 2011-02-04.
- AAA (1998-05-17). "American Anthropological Association Statement on "Race"". Aaanet.org. http://www.aaanet.org/stmts/racepp.htm. Retrieved 2009-04-18.
- AAPA (1996). "AAPA statement on biological aspects of race". Am J Phys Anthropol 101: 569–570. DOI:10.1002/ajpa.1331010408.
- (1949) "The seventy-five percent rule for subspecies". Condor 51 (6): 250–258. DOI:10.2307/1364805.
- Amundson, Ron (2005). "Disability, Ideology, and Quality of Life: A Bias in Biomedical Ethics". In David T. Wasserman, Robert Samuel Wachbroit, Jerome Edmund Bickenbach. Quality of life and human difference: genetic testing, health care, and disability. Cambridge University Press. pp. 101–24. ISBN 9780521832014. http://books.google.com/books?id=9PvWVZIzoTIC&pg=PA107.
- Angier, Natalie (2000-08-22). "Do Races Differ? Not Really, DNA Shows". The New York Times. http://www.nytimes.com/library/national/science/082200sci-genetics-race.html. Retrieved 9 August 2010.
- Appiah, Kwame Anthony (1992). In My Father's House: Africa in the Philosophy of Culture. Oxford University Press. ISBN 9780195068528.
- Armelagos, George (2000). "Galileo wept: A critical assessment of the use of race in forensic anthropology". Transforming Anthropology 9: 19–29. DOI:10.1525/tran.2000.9.2.19.
- (2003-11-10) "Does Race Exist?". Scientific American Magazine.
- (August 2004) "Deconstructing the relationship between genetics and race". Nat. Rev. Genet. 5 (8): 598–609. DOI:10.1038/nrg1401. PMID 15266342.
- Banton, Michael (1977) (paperback). The idea of race. Boulder: Westview Press. ISBN 0891587195.
- (October 1990) "Relationships estimated by isonymy among the Italo-Greco villages of southern Italy". Human Biology 62 (5): 649–63. PMID 2227910.
- Blank, Rebecca M.; Dabady, Marilyn; Citro, Constance Forbes (2004). "Chapter 2". Measuring racial discrimination. National Research Council (U.S.). Panel on Methods for Assessing Discrimination. National Adademies Press. pp. 317. ISBN 9780309091268.
- Bindon, Jim (2005). "Post World War II". University of Alabama. http://www.as.ua.edu/ant/bindon/ant275/presentations/POST_WWII.PDF#search=%22stanley%20marion%20garn%22. Retrieved August 28, 2006.
- (2004) "Race-Based Therapeutics". New England Journal of Medicine 351 (20): 2035–2037. DOI:10.1056/NEJMp048271. PMID 15533852.
- (1912) "Change in Bodily Form of Descendants of Immigrants". American Anthropologist 14: 530–562. DOI:10.1525/aa.1912.14.3.02a00080.
- Boyd, William C. (1950). Genetics and the races of man: an introduction to modern physical anthropology. Boston: Little, Brown and Company. p. 207.
- Brace, CL (2000). "Does race exist? An antagonist's perspective". Pbs.org. http://www.pbs.org/wgbh/nova/first/brace.html. Retrieved 2010-10-11.
- Brace, CL (2005). Race is a four letter word. Oxford University Press. pp. 326. ISBN 9780195173512.
- (2003) "Attitudinal barriers to delivery of race-targeted pharmacogenomics among informed lay persons". Genetics in Medicine 5 (5): 385–392. DOI:10.1097/01.GIM.0000087990.30961.72. PMID 14501834.
- Conley, D (2007). "Being black, living in the red"". In PS Rothenberg. Race, Class, and Gender in the United States (7th ed.). New York: Worth Publishers. pp. 350–358.
- Cravens, Hamilton (2010). "What's New in Science and Race since the 1930s?: Anthropologists and Racial Essentialism". The Historian 72 (2).
- (1988) "Race, reform, and retrenchment: Transformation and legitimation in antidiscrimination law". Harvard Law Review 101 (7): 1331–1337. DOI:10.2307/1341398.
- (2003) "Race and genomics". N Engl J Med 348 (12): 1166–1170. DOI:10.1056/NEJMsb022863. PMID 12646675.
- Currell, Susan; Cogdell, Christina (2006). Popular Eugenics: National Efficiency and American Mass Culture in The 1930s. Athens, OH: Ohio University Press. p. 203. ISBN 082141691X.
- Desmond, Adrian; Moore, James (2009), Darwin's sacred cause: how a hatred of slavery shaped Darwin's views on human evolution, Allen Lane, Penguin Books, pp. 484, ISBN 9781846140358
- Dikötter, Frank (1992). The discourse of race in modern China. Stanford: Stanford University Press. ISBN 9780804719940.
- Dobzhansky, T. (1970). Genetics of the Evolutionary Process. New York, NY: Columbia University Press. ISBN 0231028377.
- (2005) "Race and reification in science". Science 307 (5712): 1050–1051. DOI:10.1126/science.1110303. PMID 15718453.
- Edwards, AW (August 2003). "Human genetic diversity: Lewontin's fallacy". Bioessays 25 (8): 798–801. DOI:10.1002/bies.10315. PMID 12879450.
- Ehrlich, Paul; Holm, Richard W. (1964). "A Biological View of Race". In Ashley Montagu. The Concept of Race. Collier Books. pp. 153–179.
- Gill, G (2000). "Does Race Exist? A proponent's perspective". Pbs.org. http://www.pbs.org/wgbh/nova/first/gill.html. Retrieved 2009-04-18.
- Gitschier, Jane (2005). "The Whole Side of It—An Interview with Neil Risch" 1 (1): e14. DOI:10.1371/journal.pgen.0010014. PMID 17411332.
- Gordon, Milton Myron (1964). Assimilation in American life: the role of race, religion, and national origins. Oxford: Oxford University Press. ISBN 978-0-19-500896-8.
- Graves, Joseph L (2001). The Emperor's New Clothes: Biological Theories of Race at the Millennium. Rutgers University Press.
- Graves, Joseph L. (2006). "What We Know and What We Don't Know: Human Genetic Variation and the Social Construction of Race". Social Science Research Council (SSRC). http://raceandgenomics.ssrc.org/Graves/. Retrieved 2011-01-22.
- (December 2006) "Taxonomic considerations in listing subspecies under the U.S. Endangered Species Act". Conservation Biology 20 (6): 1584–94. DOI:10.1111/j.1523-1739.2006.00530.x. PMID 17181793.
- Harris, Marvin (1980). Patterns of race in the Americas. Westport, Conn: Greenwood Press. ISBN 0-313-22359-9.
- Herrnstein, Richard; Murray, Charles (1996). The Bell Curve: Intelligence and class structure in American life. Simon & Schuster.
- Hooton, Earnest A (22 January 1926). "Methods of Racial Analysis". Science 63 (1621): 75–81. DOI:10.1126/science.63.1621.75.
- Human Genome Project (2003). "Human Genome Project Information: Minorities, Race, and Genomics". U.S. Department of Energy(DOE)-Human Genome Program. http://www.ornl.gov/sci/techresources/Human_Genome/elsi/minorities.shtml.
- (November 2004) "Genetic variation, classification and 'race'". Nat. Genet. 36 (11 Suppl): S28–33. DOI:10.1038/ng1435. PMID 15508000.
- (1997) "The persistence of racial thinking and the myth of racial divergence". Am Anthropol 99: 534–544. DOI:10.1525/aa.19188.8.131.524.
- (2004) "Conceptualizing human variation". Nature Genetics 36 (S17–S20). DOI:10.1038/ng1455. PMID 15507998.
- King, Desmond (2007). "Making people work: Democratic consequences of workfare". In Beem, Christopher; Mead, Lawrence M.. Welfare Reform and Political Theory. New York: Russell Sage Foundation Publications. pp. 65–81. ISBN 0-87154-588-8.
- Krulwich, Robert (2009-02-02). "Your Family May Once Have Been A Different Color". Morning Edition, National Public Radio. http://www.npr.org/templates/story/story.php?storyId=100057939.
- Lee, Jayne Chong-Soon (1997). "Review essay: Navigating the topology of race"". In Gates, E. Nathaniel. Critical Race Theory: Essays on the Social Construction and Reproduction of Race. 4:The Judicial Isolation of the "Racially" Oppressed. New York: Garland Pub. pp. 393–426. ISBN 9780815326038.
- (2008) "The ethics of characterizing difference: guiding principles on using racial categories in human genetics". Genome Biol. 9 (7): 404. DOI:10.1186/gb-2008-9-7-404. PMID 18638359.
- Lewis, B (1990). Race and slavery in the Middle East. New York: Oxford University Press. ISBN 0195062833.
- Lie, John (2004). Modern Peoplehood. Cambridge, Mass.: Harvard University Press. ISBN 0674013271.
- (2001) "How "Caucasoids" got such big crania and why they shrank: from Morton to Rushton". Curr Anthropol 42 (1): 69–95. DOI:10.1086/318434. PMID 14992214.
- Lieberman, Leonard; Kirk, Rodney (1997). "Teaching About Human Variation: An Anthropological Tradition for the Twenty-first Century". In Rice, Patricia; Kottak, Conrad Phillip; White, Jane G.; Richard H. Furlow. The Teaching of Anthropology: Problems, Issues, and Decisions. Mayfield Pub. pp. 381. ISBN 1-55934-711-2.
- (1995) "Race and Three Models of Human Origins". American Anthropologist 97 (2): 231–242. DOI:10.1525/aa.1995.97.2.02a00030.
- (1992) "Race in Biology and Anthropology: A Study of College Texts and Professors". Journal of Research in Science Teaching 29: 301–321. DOI:10.1002/tea.3660290308.
- (1972) "The Apportionment of Human Diversity". Evolutionary Biology 6: 381–397.
- (1962) "On the Non-Existence of Human Races". Current Anthropology 3: 279–281. DOI:10.1086/200290.
- (August 2003) "Human genetic diversity and the nonexistence of biological races". Human Biology 75 (4): 449–71. DOI:10.1353/hub.2003.0058. PMID 14655871. Retrieved on 2009-04-18.
- Marks, J (1995). Human biodiversity: genes, race, and history. New York: Aldine de Gruyter. ISBN 0-585-39559-4.
- Marks, Jonathan (2002). "Folk Heredity". In Jefferson M. Fish. Race and Intelligence: Separating Science from Myth. Mahwah, NJ: Lawrence Erlbaum Associates. p. 98. ISBN 0805837574.
- Marks, Jonathan (2008). "Race: Past, present and future. Chapter 1". In Barbara Koenig, Sandra Soo-Jin Lee & Sarah S. Richardson. Revisiting Race in a Genomic Age. Rutgers University Press.
- Mayr, E. (1969). Principles of Systematic Zoology. New York, NY: McGraw-Hill. ISBN 0070411433.
- (Winter 2002) "The Biology of Race and the Concept of Equality". Daedalus 31 (1): 89–94.
- (1996) "The perceived racism scale: A multidimensional assessment of the experience of white racism among African Americans" 6 (1–2): 154–166.
- Meltzer, M (1993). Slavery: a world history (revised ed.). Cambridge, MA: DaCapo Press. ISBN 0306805367.
- (2007) "Race, racism, and academic complicity". American Ethnologist 34: 238. DOI:10.1525/ae.2007.34.2.238.
- Miles, Robert (2000). "Apropos the idea of race ... again". In Les Back, John Solomos. Theories of race and racism. Psychology Press. pp. 125–143. ISBN 9780415156721.
- Molnar, Stephen (1992). Human variation: races, types, and ethnic groups. Englewood Cliffs, N.J: Prentice Hall. ISBN 0-13-446162-2.
- (1941) "The Concept of Race in The Human Species in the Light of Genetics" (PDF). Journal of Heredity 32 (8): 243–248.
- Montagu, Ashley (1997) (paperback). Man’s Most Dangerous Myth: The Fallacy of Race. AltaMira Press. ISBN 0803946481.
- Montagu, Ashley (1962). "The Concept of Race". Retrieved on 26 January 2009.
- Morgan, Edmund S. (1975). American Slavery, American Freedom: The Ordeal of Colonial Virginia. W. W. Norton and Company, Inc..
- Mountain, Joanna L.; Risch, Neil (2004). "Assessing genetic contributions to phenotypic differences among 'racial' and 'ethnic' groups" (pdf). DOI:10.1038/ng1456.
- Muffoletto, Robert (2003). "Ethics: A discourse of power" 47 (6): 62–66. DOI:10.1007/BF02763286.
- Nobles, Melissa (2000). Shades of citizenship: race and the census in modern politics. Stanford, Calif: Stanford University Press. ISBN 0-8047-4059-3.
- (2005) "Controversies in biomedical, behavioral, and forensic sciences". Am Psychol 60 (1): 115–128. DOI:10.1037/0003-066X.60.1.115. PMID 15641926.
- (1999) "Genomic Views of Human History". Science 286 (5439): 451–453. DOI:10.1126/science.286.5439.451. PMID 10521333.
- (May 2007) "Genomics, divination, 'racecraft'". American Ethnologist 34: 205–22. DOI:10.1525/ae.2007.34.2.205.
- (2002) "Diagnosability versus mean differences of sage sparrow subspecies". Auk 119 (1): 26–35. DOI:[0026:DVMDOS2.0.CO;2 10.1642/0004-8038(2002)119[0026:DVMDOS]2.0.CO;2].
- (March 2000) "Least-inclusive taxonomic unit: a new taxonomic concept for biology". Proceedings. Biological Sciences 267 (1443): 627–30. DOI:10.1098/rspb.2000.1048. PMID 10787169.
- Polo, Marco (2007). "Chapter 21: Of the Country travelled over upon leaving Ormus". The Travels of Marco Polo. Cosimo, Inc. pp. 408. ISBN 9781602068612.
- Race, Ethnicity, and Genetics Working Group (October 2005). "The use of racial, ethnic, and ancestral categories in human genetics research". American Journal of Human Genetics 77 (4): 519–32. DOI:10.1086/491747. PMID 16175499.
- Reardon, Jenny (2005). "Post World-War II Expert Discourses on Race". Race to the finish: identity and governance in an age of genomics. Princeton UP. pp. 17ff. ISBN 9780691118574. http://books.google.com/books?id=HMHiuOJIQcYC&pg=PA17.
- (2002) "Apportionment of global human genetic diversity based on craniometrics and skin color". Am J Phys Anthropol 118 (4): 393–398. DOI:10.1002/ajpa.10079. PMID 12124919.
- Risch, Neil (2002). "Categorization of humans in biomedical research: genes, race and disease" (PDF). Genome Biology 3 (7): comment2007. DOI:10.1186/gb-2002-3-7-comment2007. PMID 12184798.
- (April 2002) "Patterns of human diversity, within and among continents, inferred from biallelic DNA polymorphisms". Genome Res. 12 (4): 602–12. DOI:10.1101/gr.214902. PMID 11932244.
- (2005) "Clines, Clusters, and the Effect of Study Design on the Inference of Human Population Structure". PLoS Genetics 1 (6): e70. DOI:10.1371/journal.pgen.0010070. PMID 16355252.
- (1992) "Forensic Anthropology and the Concept of Race: If Races Don't Exist, Why are Forensic Anthropologists So Good at Identifying them". Social Science and Medicine 34 (2): 107–111. DOI:10.1016/0277-9536(92)90086-6. PMID 1738862.
- Sesardic, Neven (2010). "Race: A Social Destruction of a Biological Concept" 25 (143). DOI:10.1007/s10539-009-9193-7.
- Segal, Daniel A (1991). "'The European'_ Allegories of Racial Purity". Anthropology Today 7 (5): 7–9. DOI:10.2307/3032780.
- (September 2004) "Evidence for gradients of human genetic diversity within and among continents". Genome Res. 14 (9): 1679–85. DOI:10.1101/gr.2529604. PMID 15342553.
- Schaefer, Richard T. (ed.) (2008). Encyclopedia of Race, Ethnicity and Society. Sage. p. 1096. ISBN 9781412926942.
- (2004) "Opinion: Genetic ancestry and the search for personalized genetic histories". Nature Reviews Genetics 5 (8): 611–8. DOI:10.1038/nrg1405. PMID 15266343.
- Sivanandan, A (2000). "Apropos the idea of 'race' ... again"". In Miles R. Theories of Race and Racism. London: Routledge. pp. 125–143.
- Slotkin, J. S. (1965). "The Eighteenth Century". Readings in early Anthropology. Methuen Publishing. pp. 175–243.
- (1997) "Not just a social construct: Theorising race and ethnicity". Sociology 31 (2): 307–327. DOI:10.1177/0038038597031002007.
- Smedley, A (1999). Race in North America: origin and evolution of a worldview (2nd ed.). Boulder: Westview Press. ISBN 0813334489.
- Smedley, Audrey (2002). "Science and the Idea of Race: A Brief History". In Jefferson M. Fish. Race and Intelligence: Separating Science from Myth. Mahwah, NJ: Lawrence Erlbaum Associates. p. 172. ISBN 0805837574.
- (January 2005) "Race as biology is fiction, racism as a social problem is real: Anthropological and historical perspectives on the social construction of race". Am Psychol 60 (1): 16–26. DOI:10.1037/0003-066X.60.1.16. PMID 15641918.
- Smedley, Audrey (2007-March-14-17). "The History of the Idea of Race... and Why It Matters".
- Sober, Elliott (2000). Philosophy of biology (2nd ed.). Boulder, CO: Westview Press. ISBN 978-0813391267.
- Stanton, W (1982) . The leopard's spots: scientific attitudes toward race in America, 1815–1859. University of Chicago Press. ISBN 0226771229.
- Stocking, George W. (1968). Race, Culture and Evolution: Essays in the History of Anthropology. University of Chicago Press. pp. 380. ISBN 9780226774947.
- Takaki, R (1993) (paperback). A different mirror: a history of multicultural America. Boston: Little, Brown. ISBN 0316831123.
- (2005) "Genetic Structure, Self-Identified Race/Ethnicity, and Confounding in Case-Control Association Studies". The American Journal of Human Genetics 76 (2): 268–75. DOI:10.1086/427888. PMID 15625622.
- (1998) "Human races: a genetic and evolutionary perspective". Am Anthropol 100: 632–650. DOI:10.1525/aa.19184.108.40.2062.
- Thompson, William; Hickey, = Joseph (2005). Society in Focus. Boston, MA: Pearson. ISBN 0-205-41365-X.
- (2004) "Implications of biogeography of human populations for 'race' and medicine". Nature Genetics 36 (11 Suppl): S21. DOI:10.1038/ng1438. PMID 15507999.
- Todorov, T (1993). On human diversity. Cambridge, MA: Harvard University Press,. ISBN 0674634381.
- (2006) "What is a population? An empirical evaluation of some genetic methods for identifying the number of gene pools and their degree of connectivity". Molecular Ecology 15 (6): 1419–39. DOI:10.1111/j.1365-294X.2006.02890.x. PMID 16629801.
- Weiss, Rick (2005-12-16). "Scientists Find A DNA Change That Accounts For Light Skin". The Washington Post. http://www.washingtonpost.com/wp-dyn/content/article/2005/12/15/AR2005121501728.html.
- Willing, Richard (2005-08-16). "DNA tests offer clues to suspect's race". USA Today. http://www.usatoday.com/news/nation/2005-08-16-dna_x.htm.
- (1953) "The Subspecies Concept and Its Taxonomic Application". Systematic Zoology 2 (3): 97–110. DOI:10.2307/2411818.
- (2001) "Population genetic structure of variable drug response". Nat Genet 29 (3): 265–269. DOI:10.1038/ng761. PMID 11685208.
- Winfield, AG (2007). Eugenics and education in America: Institutionalized racism and the implications of history, ideology, and memory. New York: Peter Lang Publishing, Inc. pp. 45–46.
- (2007) "Genetic Similarities Within and Between Human Populations". Genetics 176 (1): 351–9. DOI:10.1534/genetics.106.067355. PMID 17339205.
- Wright, Sewall (1978). Evolution and the Genetics of Populations. 4, Variability Within and Among Natural Populations. Chicago, Illinois: Univ. Chicago Press. p. 438.
- von Vacano, Diego. "The Color of Citizenship: Race, Modernity and Latin American/Hispanic Political Thought". Oxford: Oxford University Press, 2011.
External links
- Race: the Power of an Illusion a three part documentary from California Newsreel.
- James, Michael (2008) Race, in the Stanford Encyclopedia of Philosophy.
- Ten Things Everyone Should Know About Race by California Newsreel.
- American Anthropological Association's educational website on race with links for primary school educators and researchers
- Boas's remarks on race to a general audience
- Catchpenny mysteries of ancient Egypt, "What race were the ancient Egyptians?", Larry Orcutt.
- Judy Skatssoon, "New twist on out-of-Africa theory", ABC Science Online, Wednesday, 14 July 2004.
- Racial & Ethnic Distribution of ABO Blood Types – bloodbook.com
- Are White Athletes an Endangered Species? And Why is it Taboo to Talk About It? Discussion of racial differences in athletics
- "Does Race Exist? A proponent's perspective" – Author argues that the evidence from forensic anthropology supports the idea of race.
- "Does Race Exist? An antagonist's perspective" – The author argues that clinal variation undermines the idea of race.
- American Ethnography – The concept of race Ashley Montagu's 1962 article in American Anthropology
- American Ethnography – The genetical theory of race, and anthropological method Ashley Montagu's 1942 American Anthropology article
Official statements and standards
- "The Race Question", UNESCO, 1950
- US Census Bureau: Definition of Race
- American Association of Physical Anthropologists' Statement on Biological Aspects of Race
- "Standards for Maintaining, Collecting, and Presenting Federal Data on Race and Ethnicity", Federal Register 1997
- American Anthropological Association's Statement on Race and RACE: Are we so different? a public education program developed by the American Anthropological Association.
Popular press
- "Race (human)" article on Encyclopædia Britannica Online.
- The Myth of Race On the lack of scientific basis for the concept of human races (Medicine Magazine, 2007).
- Race – The power of an illusion Online companion to California Newsreel's documentary about race in society, science, and history
- Steven and Hilary Rose, The Guardian, "Why we should give up on race", 9 April 2005
- Times Online, "Gene tests prove that we are all the same under the skin", 27 October 2004.
- Michael J. Bamshad, Steve E. Olson "Does Race Exist?", Scientific American, December 2003
- "Gene Study Identifies 5 Main Human Populations, Linking Them to Geography", Nicholas Wade, NYTimes, December 2002. Covering
- Scientific American Magazine (December 2003 Issue) Does race exists ?.
- DNA Study published by United Press International showing how 30% of White Americans have at least one Black ancestor
- Yehudi O. Webster Twenty-one Arguments for Abolishing Racial Classification, The Abolitionist Examiner, June 2000
- The Tex(t)-Mex Galleryblog, An updated, online supplement to the University of Texas Press book (2007), Tex(t)-Mex
- Times of India – Article about Asian racism
- South China Morning Post – Going beyond ‘sorry’
- Is Race "Real"? forum organized by the Social Science Research Council, includes 2005 op-ed article by A.M. Leroi from the New York Times advocating biological conceptions of race and responses from scholars in various fields More from Leori with responses
- Richard Dawkins: Race and creation (extract from The Ancestor's Tale: A Pilgrimage to the Dawn of Life) – On race, its usage and a theory of how it evolved. (Prospect Magazine October 2004)
This page uses content from the English-language Wikipedia. The original content was at Race (human classification). The list of authors can be seen in the page history. As with this Familypedia wiki, the content of Wikipedia is available under the Creative Commons License.
While the idea of dark matter was originally proposed to explain the structure of galaxies, one of its great successes was explaining the nature of the Universe itself. Features of the Cosmic Microwave Background can be explained by the presence of dark matter. And models of the early Universe produce galaxies and galaxy clusters by building on structures formed by dark matter. The fact that these models get the big picture so right has been a strong argument in their favor.
But a new study suggests that the same models get the details wrong—by an entire order of magnitude. The people behind the study suggest that either there’s something wrong with the models, or our understanding of dark matter may need an adjustment.
Under a lens
The new study, performed by an international team of researchers, took advantage of a phenomenon called gravitational lensing. Gravity warps space itself, and it can do so in a way that bends light, analogous to a lens. If a massive object—say, a galaxy—sits between us and a distant object, it can create a gravitational lens that magnifies or distorts the distant object. Depending on the precise details of how the objects are arranged, the results can be anything from a simple magnification to circular rings or having the object appear multiple times.
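For a sense of the scale involved, the strength of a lens is often summarized by its Einstein radius, the angular scale at which rings and multiple images appear. The study itself does not hinge on this formula, but a minimal Python sketch of the textbook point-mass expression, using made-up illustrative distances and mass, looks like this:

import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # one solar mass, kg
MPC = 3.086e22     # one megaparsec, m

def einstein_radius(mass, d_lens, d_source, d_lens_source):
    """Angular Einstein radius (radians) of a point-mass lens."""
    return math.sqrt(4 * G * mass / c**2 * d_lens_source / (d_lens * d_source))

# Illustrative numbers only: a 10^12 solar-mass galaxy halfway to a distant source.
theta = einstein_radius(1e12 * M_SUN, 1000 * MPC, 2000 * MPC, 1000 * MPC)
print(round(theta * 206265, 2), "arcseconds")  # 206265 arcseconds per radian

With these round numbers the answer comes out near a couple of arcseconds, which is the typical scale of galaxy-strength lenses.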
Because dark matter’s effects are detectable via gravity, we can “see” the presence of dark matter via its gravitational-lensing effects. In a few cases, we’ve even detected lensing in regions where little normal matter is present. That’s one of the many pieces of evidence in favor of dark matter.
The researchers used gravitational lensing to set up a test that, at least conceptually, was very simple. We’ve built models of the early Universe that indicate how dark matter helped structure the first galaxies and drew them into clusters of galaxies. These models, when run forward, provide a description of what that dark matter distribution should look like at different points in the Universe’s history up to the present. So the researchers decided to use gravitational lensing to determine whether the dark matter distribution seen in the models matched where we see it via gravitational lensing.
According to these models, the Universe was built hierarchically. Via gravitational interactions with itself, dark matter formed filaments that intersected in a complex, three-dimensional meshwork. The additional gravitational pull at the points where filaments intersected would draw in regular matter, leading to the first galaxies. Over time, the continued draw of gravity pulled galaxies together, forming large clusters. By examining the output of these models, we can get a look at the expected distribution of dark matter around clusters. And by zooming in, we can see how dark matter should be distributed in the area of individual galaxies.
That distribution of dark matter can be viewed as a prediction of the models.
Meanwhile, in the actual Universe…
To test those predictions, the researchers used images from the Hubble space telescope to map out all the objects in and around a large collection of galaxy clusters. Follow-up imaging using the Very Large Telescope helped identify the distance of those objects based on how much their light was shifted to the red end of the spectrum by the expansion of the Universe—the larger the redshift, the more distant the object. This allowed the researchers to determine which objects must be behind the galaxy cluster and thus potential candidates for gravitational lensing.
A software package then used the data to create a mass distribution for each galaxy cluster. This included the overall lensing effects of the entire cluster, as well as the sub-lensing driven by individual galaxies within the cluster. The researchers found a strong agreement between the appearance of lensed objects and the location of individual galaxies, which allowed them to validate their mass-distribution calculations.
The researchers then used the Universe simulator to build 25 simulated clusters and performed a similar analysis with the clusters. They did so in order to identify the sites of possible lensing and the locations that could create the greatest distortions.
The two didn’t match. There were significantly more areas that generated high distortion in the real-Universe galaxy clusters than there were in the models. This is what you would expect if the actual distribution of dark matter were a bit lumpier than predicted, with the dark matter halos around individual galaxies being more compact than the models suggest.
This isn’t the first discrepancy of the sort we’ve seen. Dark matter models also predict that there should be more dwarf satellite galaxies around the Milky Way and that they should be more diffuse than they are. But if we were to adjust… |
Dot matrix printing
Dot matrix printing or impact matrix printing is a type of computer printing which uses a print head that moves back and forth, or in an up and down motion, on the page and prints by impact, striking an ink-soaked cloth ribbon against the paper, much like the print mechanism on a typewriter. However, unlike a typewriter or daisy wheel printer, letters are drawn out of a dot matrix, and thus, varied fonts and arbitrary graphics can be produced.
Each dot is produced by a tiny metal rod, also called a "wire" or "pin", which is driven forward by the power of a tiny electromagnet or solenoid, either directly or through small levers (pawls). Facing the ribbon and the paper is a small guide plate pierced with holes that serve as guides for the pins. This plate may be made of hard plastic or of an artificial jewel such as sapphire or ruby.
The portion of the printer containing the pins is called the print head. When running the printer, it generally prints one line of text at a time. There are two approaches to achieve this:
The common serial dot matrix printers use a horizontally moving print head. The print head can be thought of featuring a single vertical column of seven or more pins approximately the height of a character box. In reality, the pins are arranged in up to four vertically or/and horizontally slightly displaced columns in order to increase the dot density and print speed through interleaving without causing the pins to jam. Thereby, up to 48 pins can be used to form the characters of a line while the print head moves horizontally.
In a considerably different configuration, so called line dot matrix printers use a fixed print head almost as wide as the paper path utilizing a horizontal line of thousands of pins for printing. Sometimes two horizontally slightly displaced rows are used to improve the effective dot density through interleaving. While still line-oriented, these printers for the professional heavy-duty market effectively print a whole line at once while the paper moves forward below the print head.
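To make the idea of building characters from individual pin strikes concrete, here is a small illustrative Python sketch. The 5×7 bitmap below is a hypothetical glyph for the letter "A", not taken from any particular printer's character ROM; in a real serial printer the same pattern would be fired one vertical column of pins at a time as the head sweeps across the character cell.

# Hypothetical 5x7 bitmap: '#' marks a pin strike, '.' marks no strike.
GLYPH_A = [
    ".###.",
    "#...#",
    "#...#",
    "#####",
    "#...#",
    "#...#",
    "#...#",
]

def render(glyph):
    """Show the dots the print head would leave on paper, row by row."""
    for row in glyph:
        print("".join("o" if cell == "#" else " " for cell in row))

render(GLYPH_A)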
The printing speed of serial dot matrix printers with moving heads varies from 50 to 550 cps. In contrast to this, line matrix printers are capable of printing much more than 1000 cps, resulting in a throughput of up to 800 pages/hour.
These machines can be highly durable. When they do wear out, it is generally due to ink invading the guide plate of the print head, causing grit to adhere to it; this grit slowly causes the channels in the guide plate to wear from circles into ovals or slots, providing less and less accurate guidance to the printing wires. Eventually, even with tungsten blocks and titanium pawls, the printing becomes too unclear to read, a common problem when users failed to maintain the printer with regular cleaning as outlined in most user manuals.
A variation on the dot matrix printer was the cross hammer dot printer, patented by Seikosha in 1982. The smooth cylindrical roller of a conventional printer was replaced by a spinning, fluted cylinder. The print head was a simple hammer, with a vertical projecting edge, operated by an electromagnet. Where the vertical edge of the hammer intersected the horizontal flute of the cylinder, compressing the paper and ribbon between them, a single dot was marked on the paper. Characters were built up of multiple dots.
The LA30 was a 30 character/second dot matrix printer introduced in 1970 by Digital Equipment Corporation of Maynard, Massachusetts. It printed 80 columns of uppercase-only 5×7 dot matrix characters across a unique-sized paper. The printhead was driven by a stepper motor and the paper was advanced by a somewhat-unreliable and definitely noisy solenoid ratchet drive. The LA30 was available with both a parallel interface and a serial interface; however, the serial LA30 required the use of fill characters during the carriage-return time.
The LA30 was followed in 1974 by the LA36, which achieved far greater commercial success, becoming for a time the standard dot matrix computer terminal. The LA36 used the same print head as the LA30 but could print on forms of any width up to 132 columns of mixed-case output on standard green bar fanfold paper. The carriage was moved by a much-more-capable servo drive using a DC electric motor and an optical encoder / tachometer. The paper was moved by a stepper motor. The LA36 was only available with a serial interface but unlike the earlier LA30, no fill characters were required. This was possible because, while the printer never communicated at faster than 30 characters per second, the mechanism was actually capable of printing at 60 characters per second. During the carriage return period, characters were buffered for subsequent printing at full speed during a catch-up period. The two-tone buzz produced by 60 character-per-second catch-up printing followed by 30 character-per-second ordinary printing was a distinctive feature of the LA36 quickly copied by many other manufacturers well into the 90's. Most efficient dot matrix printers used this buffering technique.
Digital then broadened the basic LA36 line onto a wide variety of dot matrix printers including:
- LA180: 180 c/s line printer
- LS120: 120 c/s terminal
- LA120: 180 c/s advanced terminal
- LA34: Cost-reduced terminal
- LA38: An LA34 with more features
- LA12: A portable terminal
In 1970, Centronics (then of Hudson, New Hampshire) introduced a dot matrix printer, the Centronics 101. The search for a reliable printer mechanism led it to develop a relationship with Brother Industries, Ltd of Japan, and the sale of Centronics-badged Brother printer mechanisms equipped with a Centronics print head and Centronics electronics. Unlike Digital, Centronics concentrated on the low-end line printer marketplace with their distinctive units. In the process, they designed the parallel electrical interface that was to become standard on most printers until it began to be replaced by the Universal Serial Bus (USB) in the late 1990s.
Printer head positioning
The printer head is attached to a metal bar that ensures correct alignment, but horizontal positioning is controlled by a band that attaches to sprockets on two wheels at each side which is then driven with an electric motor. This band may be made of stainless steel, phosphor bronze or beryllium copper alloys, nylon or various synthetic materials with a twisted nylon core to prevent stretching. Actual position can be found out either by dead count using a stepper motor, rotary encoder attached to one wheel or a transparent plastic band with markings that is read by an optical sensor on the printer head (common on inkjets).
In the 1970s and 1980s, dot matrix impact printers were generally considered the best combination of expense and versatility, and until the 1990s they were by far the most common form of printer used with personal and home computers.
The Epson MX-80, introduced in 1979, was the groundbreaking model that sparked the initial popularity of impact printers in the personal computer market. The MX-80 combined affordability with good-quality text output (for its time). Early impact printers (including the MX-80) were notoriously loud during operation, a result of the hammer-like mechanism in the print head. The MX-80 even inspired the name of a noise rock band. The MX-80's low dot density (60 dpi horizontal, 72 dpi vertical) produced printouts of a distinctive "computerized" quality. When compared to the crisp typewriter quality of a daisy-wheel printer, the dot-matrix printer's legibility appeared especially bad. In office applications, output quality was a serious issue, as the dot-matrix text's readability would rapidly degrade with each photocopy generation. IBM sold the MX-80 as the IBM 5152 Graphics Printer.
Initially, third-party software (such as the Bradford printer enhancement program) offered a quick fix to the quality issue. The software utilized a variety of software techniques to increase print quality; general strategies were doublestrike (print each line twice), and double-density mode (slow the print head to allow denser and more precise dot placement). Such add-on software was inconvenient to use, because it required the user to remember to run the enhancement program before each printer session (to activate the enhancement mode). Furthermore, not all enhancement software was compatible with all programs.
Early personal computer software focused on the processing of text, but as graphics displays became ubiquitous throughout the personal computer world, users wanted to print both text and images. Ironically, whereas the daisy-wheel printer and pen-plotter struggled to reproduce bitmap images, the first dot-matrix impact printers (including the MX-80) lacked the ability to print graphics. Yet the dot-matrix print head was well-suited to this task, and the capability, referred to as "dot-addressable" quickly became a standard feature on all dot-matrix printers intended for the personal and home computer markets. In 1981, Epson offered a retrofit EPROM kit called Graftrax to add the capability to many early MX series printers. Banners and signs produced with software that used this ability, such as Broderbund's Print Shop, became ubiquitous in offices and schools throughout the 1980s.
Progressive hardware improvements to impact printers boosted the carriage speed, added more (typeface) font options, increased the dot density (from 60 dpi up to 240 dpi), and added pseudo-color printing. Faster carriage speeds meant faster (and sometimes louder) printing. Additional typefaces allowed the user to vary the text appearance of printouts. Proportional-spaced fonts allowed the printer to imitate the non-uniform character widths of a typesetter. Increased dot density allowed for more detailed, darker printouts. The impact pins of the printhead were constrained to a minimum size (for structural durability), and dot densities above 100 dpi merely caused adjacent dots to overlap. While the pin diameter placed a lower limit on the smallest reproducible graphic detail, manufacturers were able to use higher dot density to great effect in improving text quality.
Several dot-matrix impact printers (such as the Epson FX series) offered 'user-downloadable fonts'. This gave the user the flexibility to print with different typefaces. PC software uploaded a user-defined fontset into the printer's memory, replacing the built-in typeface with the user's selection. Any subsequent text printout would use the downloaded font, until the printer was powered off or soft-reset. Several third-party programs were developed to allow easier management of this capability. With a supported word-processor program (such as WordPerfect 5.1), the user could embed up to 2 NLQ custom typefaces in addition to the printer's built-in (ROM) typefaces. (The later rise of WYSIWYG software philosophy rendered downloaded fonts obsolete.)
Single-strike and multi-strike ribbons were an attempt to address issues with the ribbon's ink quality. Standard printer ribbons used the same principles as typewriter ribbons. The printer would be at its darkest with a newly installed ribbon cartridge, but would gradually grow fainter with each successive printout. The variation in darkness over the ribbon cartridge's lifetime prompted the introduction of alternative ribbon formulations. Single-strike ribbons used a carbon-like film similar to that of single-use typewriter ribbons. As the ribbon was only usable for a single loop (rated in terms of 'character count'), the blackness was consistent and deep. Multi-strike ribbons gave an increase in ribbon life, at the expense of quality.
The high quality of single-strike ribbons had two side effects:
- At least 50% and up to 99.9% of the given ribbon surface would be wasted per character, since an entire fresh new region of ribbon was needed to print even the smallest font shapes. Ribbon advance was fixed to always span the largest character shape, so a row of periods would consume as much fresh ribbon as a row of W's, with a large span of unused carbon between each dot.
- Single-strike ribbons created a risk of espionage and loss of privacy, because the used ribbon reel could be unwound to reveal everything that had been printed. Secure disposal was required by shredding, melting, or burning of used ribbon cartridges to prevent recovery of information from garbage bins.
Several manufacturers implemented color dot-matrix impact printing through a multi-color ribbon. Color was achieved through a multi-pass composite printing process. During each pass, the print head struck a different section of the ribbon (one primary color). For a 4-color ribbon, each printed line of output required a total of 4 passes. In some color printers, such as the Apple ImageWriter II, the printer moved the ribbon relative to the fixed print head assembly. In other models, the print head was tilted against a stationary ribbon.
Due to their poor color quality and increased operating expense, color impact models never replaced their monochrome counterparts. As the color ribbon was used in the printer, the black ink section would gradually contaminate the other three colors, changing the consistency of printouts over the life of the ribbon. Hence, the color dot-matrix was suitable for abstract illustrations and pie charts, but not for photo-realistic reproduction. Dot-matrix thermal-transfer printers offered more consistent color quality, but consumed printer film, which was still more expensive. Color printing in the home would only become ubiquitous much later, with the ink-jet printer.
Near Letter Quality (NLQ)
Text quality was a recurring issue with dot-matrix printers. Near Letter Quality mode—informally specified as almost good enough to be used in a business letter—endowed dot-matrix printers with a simulated typewriter-like quality. By using multiple passes of the carriage, and higher dot density, the printer could increase the effective resolution. For example, the Epson FX-86 could achieve a theoretical addressable dot-grid of 240 by 216 dots/inch using a print head with a vertical dot density of only 72 dots/inch, by making multiple passes of the print head for each line. For 240 by 144 dots/inch, the print head would make one pass, printing 240 by 72 dots/inch, then the printer would advance the paper by half of the vertical dot pitch (1/144 inch), then the print head would make a second pass. For 240 by 216 dots/inch, the print head would make three passes with smaller paper movement (1/3 vertical dot pitch, or 1/216 inch) between the passes. To cut hardware costs, some manufacturers merely used a double strike (doubly printing each line) to increase the printed text's boldness, resulting in bolder but still jagged text. In all cases, NLQ mode incurred a severe speed penalty. Not surprisingly, all printers retained one or more 'draft' modes for high-speed printing.
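The resolution arithmetic quoted above can be written out explicitly. A small sketch, assuming the 72 dots/inch vertical pin pitch of the example head, reproduces the paper-advance figures for two- and three-pass printing:

head_dpi = 72  # vertical pin density of the print head, dots per inch

for passes in (2, 3):
    effective_dpi = head_dpi * passes   # 144 or 216 dots/inch vertically
    advance_inch = 1 / effective_dpi    # paper movement between passes
    print(f"{passes} passes: {effective_dpi} dpi vertical, "
          f"advance 1/{effective_dpi} inch ({advance_inch:.5f} in) between passes")

Running this prints the 1/144 inch and 1/216 inch advances quoted in the text.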
NLQ became a standard feature on all dot-matrix printers. While NLQ was well received in the IBM PC market, the Apple Macintosh market did not use NLQ mode at all, as it did not rely on the printer's own fonts. Mac word-processing applications used fonts stored in the computer. For non-PostScript (raster) printers, the final raster image was produced by the computer and sent to the printer, which meant dot-matrix printers on the Mac platform exclusively used raster ("graphics") printing mode. For near-letter-quality output, the Mac would simply double the resolution used by the printer, to 144 dpi, and use a screen font twice the point size desired. Since the Mac's screen resolution (72 dpi) was exactly half of the ImageWriter's maximum, this worked perfectly, creating text at exactly the desired size.
Due to the extremely precise alignment required for dot alignment between NLQ passes, typically the paper needed to be held somewhat taut in the tractor feed sprockets, and the continuous paper stack must be perfectly aligned behind or below the printer. Loosely held paper or skewed supply paper could cause misalignments between passes, rendering the NLQ text illegible.
By the mid-1980s, manufacturers had increased the pincount of the impact printhead from 7, 8, 9 or 12 pins to 18, 24, 27 or 48, with 24 pins being most common. The increased pin-count permitted superior print-quality which was necessary for success in Asian markets to print legible CJK characters. In the PC market, nearly all 9-pin printers printed at a de facto-standard vertical pitch of 9/72 inch (per printhead pass, i.e. 8 lpi). Epson's 24-pin LQ-series rose to become the new de facto standard, at 24/180 inch (per pass - 7.5 lpi). Not only could a 24-pin printer lay down a denser dot-pattern in a single-pass, it could simultaneously cover a larger area.
Compared to the older 9-pin models, a new 24-pin impact printer not only produced better-looking NLQ text, it printed the page more quickly (largely due to the 24-pin's ability to print NLQ with a single pass). 24-pin printers repeated this feat in bitmap graphics mode, producing higher-quality graphics in reduced time. While the text-quality of a 24-pin was still visibly inferior to a true letter-quality printer—the daisy wheel or laser-printer, the typical 24-pin impact printer printed more quickly than most daisy-wheel models.
As manufacturing costs declined, 24-pin printers gradually replaced 9-pin printers. Twenty-four pin printers reached a dot-density of 360×360 dpi, a marketing figure aimed at potential buyers of competing ink-jet and laser-printers. 24-pin NLQ fonts generally used a dot-density of 360x180, the highest allowable with single-pass printing. Multipass NLQ was abandoned, as most manufacturers felt the marginal quality improvement did not justify the tradeoff in speed. Most 24-pin printers offered 2 or more NLQ typefaces, but the rise of WYSIWYG software and GUI environments such as Microsoft Windows ended the usefulness of NLQ.
The desktop impact printer was gradually replaced by the inkjet printer. When Hewlett-Packard's patents expired on steam-propelled photolithographically produced ink-jet heads, the inkjet mechanism became available to the printer industry. For applications that did not require impact (e.g., carbon-copy printing), the inkjet was superior in nearly all respects: comparatively quiet operation, faster print speed, and output quality almost as good as a laser printer. By the mid-1990s, inkjet technology had surpassed dot-matrix in the mainstream market.
As of 2005, dot matrix impact technology remains in use in devices such as cash registers, ATMs, fire alarm systems, and many other point-of-sales terminals. Thermal printing is gradually supplanting them in these applications. Full-size dot-matrix impact printers are still used to print multi-part stationery, for example at bank tellers and auto repair shops, and other applications where use of tractor feed paper is desirable such as data logging and aviation. Some are even fitted with USB interfaces as standard to aid connection to modern computers without legacy ports. Dot matrix printers are also more tolerant of the hot and dirty operating conditions found in many industrial settings. The simplicity and durability of the design, as well as its similarity to older typewriter technology, allows users who are not "computer literate" to easily perform routine tasks such as changing ribbons and correcting paper jams.
One often overlooked application for dot-matrix printers is in the field of IT security. Various system and server activity logs are typically stored on the local filesystem, where a remote attacker - having achieved their primary goals - can then alter or delete the contents of the logs, in an attempt to "cover their tracks" or otherwise thwart the efforts of system administrators and security experts. However, if the log entries are simultaneously output to a printer, line-by-line, a local hard-copy record of system activity is created - and this cannot be remotely altered or otherwise manipulated. Dot-matrix printers are ideal for this task, as they can sequentially print each log entry, one entry at a time, as they are added to the log. The usual dot-matrix printer support for continuous stationery also prevents incriminating pages from being surreptitiously removed or altered without evidence of tampering.
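A minimal sketch of that idea in Python, assuming a Unix-like system with a line printer attached as the character device /dev/lp0; the device path, the log file name, and the simple polling loop are all illustrative assumptions rather than a description of any particular product (a production setup would more likely have syslog drive the printer directly):

import time

LOG_PATH = "/var/log/auth.log"   # hypothetical log file to mirror
PRINTER = "/dev/lp0"             # hypothetical printer character device

def tail_to_printer(log_path, printer_path):
    """Append each new log line to the printer as it is written to the file."""
    with open(log_path, "r") as log, open(printer_path, "w") as printer:
        log.seek(0, 2)              # start at end of file: only new entries
        while True:
            line = log.readline()
            if line:
                printer.write(line)
                printer.flush()     # push the line onto paper immediately
            else:
                time.sleep(0.5)     # wait for more log activity

if __name__ == "__main__":
    tail_to_printer(LOG_PATH, PRINTER)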
Some companies, such as Printek, DASCOM, WeP Peripherals, Epson, Okidata, Olivetti, Lexmark, and TallyGenicom still produce serial printers. Printronix is now the only manufacturer of line printers. Today, a new dot matrix printer actually costs more than most inkjet printers and some entry level laser printers. However, not much should be read into this price difference as the printing costs for inkjet and laser printers are a great deal higher than for dot matrix printers, and the inkjet/laser printer manufacturers effectively use their monopoly over arbitrarily priced printer cartridges to subsidize the initial cost of the printer itself. Dot matrix ribbons are a commodity and are not monopolized by the printer manufacturers themselves.
Advantages and disadvantages
Dot matrix printers, like any impact printer, can print on multi-part stationery or make carbon-copies. Impact printers have one of the lowest printing costs per page. As the ink is running out, the printout gradually fades rather than suddenly stopping partway through a job. They are able to use continuous paper rather than requiring individual sheets, making them useful for data logging. They are good, reliable workhorses ideal for use in situations where low printing cost is more important than quality. The ink ribbon also does not easily dry out, including both the ribbon stored in the casing as well as the portion that is stretched in front of the print head; this unique property allows the dot-matrix printer to be used in environments where printer duty can be rare, for instance, as with a Fire Alarm Control Panel's output.
Impact printers create noise when the pins or typeface strike the ribbon to the paper. Sound-damping enclosures may have to be used in quiet environments. They can only print lower-resolution graphics, with limited color performance, limited quality, and lower speeds compared to non-impact printers. While they support fanfold paper with tractor holes well, single-sheet paper may have to be wound in and aligned by hand, which is relatively time-consuming, or a sheet feeder may be utilized which can have a lower paper feed reliability. When printing labels on release paper, they are prone to paper jams when a print wire snags the leading edge of the label while printing at its very edge. For text-only labels (e.g., mailing labels), a daisy wheel printer or band printer may offer better print quality and a lower risk of damaging the paper.
The advantages are low purchase cost, the ability to handle multipart forms, cheap operation (fresh ribbons are the only consumable), ruggedness, low repair cost, and the ability to print on continuous paper. This makes it possible to print long banners that span several sheets of paper.
The disadvantages are noise, low resolution (you can see the dots making up each character), limited color support with faded and streaky output, slow printing, and a greater tendency to jam, with jams that are more difficult to clear. This is because paper is fed in using two sprockets engaging with holes in the paper. A small tear on the side of a sheet can cause a jam, with paper debris that is tedious to remove.
- Character matrix printer
- Daisy wheel printing
- Dye-sublimation printer
- Typeball printer
- Line printer
- Printer (computing)
- Thermal printer
- IBM Proprinter
- "United States Patent 4194846". 1980-03-25. Retrieved 2009-07-16.
- Patent US4462705 Cross hammer dot printer, Google Patents, accessed 2013-10-01
- "MX-80 SOUND".
- Dot Matrix, InfoWorld Jul 28, 1986.
- High speed, near letter quality dot matrix printers Popular Science Dec 1983.
- "Panasonic KX-P2123. (dot-matrix printer) (Hardware Review) (Evaluation)".
Overview
Teaching: 30 min
Exercises: 0 min
Questions
- How can my programs do different things based on data values?
Objectives
- Write conditional statements including if, elif, and else branches.
- Correctly evaluate expressions containing and and or.
In our last lesson, we discovered something suspicious was going on in our inflammation data by drawing some plots. How can we use Python to automatically recognize the different features we saw, and take a different action for each? In this lesson, we’ll learn how to write code that runs only when certain conditions are true.
We can ask Python to take different actions, depending on a condition, with an if statement:

num = 37
if num > 100:
    print('greater')
else:
    print('not greater')
print('done')
not greater
done
The second line of this code uses the keyword if to tell Python that we want to make a choice. If the test that follows the if statement is true, the body of the if (i.e., the lines indented underneath it) is executed. If the test is false, the body of the else is executed instead. Only one or the other is ever executed:
Conditional statements don’t have to include an else. If there isn’t one, Python simply does nothing if the test is false:

num = 53
print('before conditional...')
if num > 100:
    print('53 is greater than 100')
print('...after conditional')
before conditional...
...after conditional
We can also chain several tests together using elif, which is short for “else if”. The following Python code uses elif to print the sign of a number.

num = -3
if num > 0:
    print(num, "is positive")
elif num == 0:
    print(num, "is zero")
else:
    print(num, "is negative")

-3 is negative
One important thing to notice in the code above is that we use a double equals sign == to test for equality rather than a single equals sign =, because the latter is used to mean assignment.

We can also combine tests using and and or. and is only true if both parts are true:
if (1 > 0) and (-1 > 0):
    print('both parts are true')
else:
    print('at least one part is false')
at least one part is false
or is true if at least one part is true:
if (1 < 0) or (-1 < 0):
    print('at least one test is true')
at least one test is true
Now that we’ve seen how conditionals work, we can use them to check for the suspicious features we saw in our inflammation data. In the first couple of plots, the maximum inflammation per day seemed to rise like a straight line, one unit per day. We can check for this inside the for loop we wrote with the following conditional:
if numpy.max(data, axis=0)[0] == 0 and numpy.max(data, axis=0)[20] == 20:
    print('Suspicious looking maxima!')
We also saw a different problem in the third dataset; the minima per day were all zero (looks like a healthy person snuck into our study). We can also check for this with an elif condition:

elif numpy.sum(numpy.min(data, axis=0)) == 0:
    print('Minima add up to zero!')
And if neither of these conditions are true, we can use else to give the all-clear:

else:
    print('Seems OK!')
Let’s test that out:
import numpy

data = numpy.loadtxt(fname='inflammation-01.csv', delimiter=',')
if numpy.max(data, axis=0)[0] == 0 and numpy.max(data, axis=0)[20] == 20:
    print('Suspicious looking maxima!')
elif numpy.sum(numpy.min(data, axis=0)) == 0:
    print('Minima add up to zero!')
else:
    print('Seems OK!')
Suspicious looking maxima!
data = numpy.loadtxt(fname='inflammation-03.csv', delimiter=',')
if numpy.max(data, axis=0)[0] == 0 and numpy.max(data, axis=0)[20] == 20:
    print('Suspicious looking maxima!')
elif numpy.sum(numpy.min(data, axis=0)) == 0:
    print('Minima add up to zero!')
else:
    print('Seems OK!')
Minima add up to zero!
In this way, we have asked Python to do something different depending on the condition of our data. Here we printed messages in all cases, but we could also imagine not using the else branch so that messages are only printed when something is wrong, freeing us from having to manually examine every plot for features we’ve seen before.
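For example, a minimal sketch of that idea (assuming the inflammation-*.csv files from the earlier lessons are in the current directory) drops the else branch and loops over every dataset, printing a message only when something looks wrong:

import glob
import numpy

for filename in sorted(glob.glob('inflammation-*.csv')):
    data = numpy.loadtxt(fname=filename, delimiter=',')
    if numpy.max(data, axis=0)[0] == 0 and numpy.max(data, axis=0)[20] == 20:
        print(filename, 'has suspicious looking maxima!')
    elif numpy.sum(numpy.min(data, axis=0)) == 0:
        print(filename, 'has minima that add up to zero!')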
How Many Paths?
Which of the following would be printed if you were to run this code? Why did you pick this answer?

if 4 > 5:
    print('A')
elif 4 == 5:
    print('B')
elif 4 < 5:
    print('C')

- A
- B
- C
- B and C
C gets printed because the first two conditions, 4 > 5 and 4 == 5, are not true, but 4 < 5 is true.
What Is Truth?
True and False are special words in Python called booleans, which represent true and false statements. However, they aren’t the only values in Python that are true and false. In fact, any value can be used in an if or elif. After reading and running the code below, explain what the rule is for which values are considered true and which are considered false.

if '':
    print('empty string is true')
if 'word':
    print('word is true')
if []:
    print('empty list is true')
if [1, 2, 3]:
    print('non-empty list is true')
if 0:
    print('zero is true')
if 1:
    print('one is true')
That’s Not Not What I Meant
Sometimes it is useful to check whether some condition is not true. The Boolean operator not can do this explicitly. After reading and running the code below, write some if statements that use not to test the rule that you formulated in the previous challenge.

if not '':
    print('empty string is not true')
if not 'word':
    print('word is not true')
if not not True:
    print('not not True is true')
Close Enough
Write some conditions that print True if the variable a is within 10% of the variable b and False otherwise. Compare your implementation with your partner’s: do you get the same answer for all possible pairs of numbers?

a = 5
b = 5.1
if abs(a - b) < 0.1 * abs(b):
    print('True')
else:
    print('False')
print(abs(a - b) < 0.1 * abs(b))
This works because the Booleans True and False have string representations which can be printed.
Python (and most other languages in the C family) provides in-place operators that work like this:
x = 1   # original value
x += 1  # add one to x, assigning result back to x
x *= 3  # multiply x by 3
print(x)
Write some code that sums the positive and negative numbers in a list separately, using in-place operators. Do you think the result is more or less readable than writing the same without in-place operators?
positive_sum = 0
negative_sum = 0
test_list = [3, 4, 6, 1, -1, -5, 0, 7, -8]
for num in test_list:
    if num > 0:
        positive_sum += num
    elif num == 0:
        pass
    else:
        negative_sum += num
print(positive_sum, negative_sum)
Here pass means “don’t do anything”. In this particular case, it’s not actually needed, since if num == 0 neither sum needs to change, but it illustrates the use of elif and pass.
Sorting a List Into Buckets
The folder containing our data files has large data sets whose names start with “inflammation-”, small ones whose names start with “small-”, and possibly other files whose sizes we don’t know. Our goal is to sort those files into three lists called large_files, small_files, and other_files, respectively. Add code to the template below to do this. Note that the string method startswith returns True if and only if the string it is called on starts with the string passed as an argument.

files = ['inflammation-01.csv', 'myscript.py', 'inflammation-02.csv', 'small-01.csv', 'small-02.csv']
large_files = []
small_files = []
other_files = []
Your solution should:
- loop over the names of the files
- figure out which group each filename belongs in
- append the filename to that list
In the end the three lists should be:
large_files = ['inflammation-01.csv', 'inflammation-02.csv']
small_files = ['small-01.csv', 'small-02.csv']
other_files = ['myscript.py']
for file in files:
    if 'inflammation-' in file:
        large_files.append(file)
    elif 'small-' in file:
        small_files.append(file)
    else:
        other_files.append(file)
print(large_files)
print(small_files)
print(other_files)
- Write a loop that counts the number of vowels in a character string.
- Test it on a few individual words and full sentences.
- Once you are done, compare your solution to your neighbor’s. Did you make the same decisions about how to handle the letter ‘y’ (which some people think is a vowel, and some do not)?
vowels = 'aeiouAEIOU'
sentence = 'Mary had a little lamb.'
count = 0
for char in sentence:
    if char in vowels:
        count += 1
print("The number of vowels in this string is " + str(count))
Use if condition to start a conditional statement, elif condition to provide additional tests, and else to provide a default.
The bodies of the branches of conditional statements must be indented.
Use == to test for equality.
X and Y is only true if both X and Y are true.
X or Y is true if either X or Y, or both, are true.
Zero, the empty string, and the empty list are considered false; all other numbers, strings, and lists are considered true.
Nest loops to operate on multi-dimensional data.
Put code whose parameters change frequently in a function, then call it with different parameter values to customize its behavior. |
Use Real World Examples to Teach Sustainability
Real world examples demonstrate the complexity and unpredictability of real issues, and as such, can stimulate critical thinking. They also highlight the need for an inter- and multi-disciplinary approach to problem solving. Further, using examples from the real world demonstrates that, oftentimes, there is no perfect solution to a given problem; in doing so, it gets students thinking about solutions, rather than just focusing on problems.
Multiple pedagogic strategies can be used to incorporate real examples into the classroom. These include teaching with case studies or with investigative cases, field experiences such as field labs or student research, and using local data and examples to teach about issues. Connecting local examples with global challenges can also be beneficial for expanding the context of larger scale issues (e.g. water quality and quantity could be both a local issue as well as a global issue) or those that are non-local, but may still affect students (e.g. drought in California affects local food prices).
Real world problems are inherently engaging since they tend to be meaningful and applicable to students' lives, either directly or indirectly (e.g. through the media or social networks). If you're not sure where to begin, the tips below can get you started. These tips were compiled from small group discussions among workshop participants at multiple InTeGrate workshops.
- Introduce students to your research - make it personal. It inspires students.
- Task students with bringing examples of real-world experiences and problems to the class.
- Bring experience into the classroom through guest speakers, case studies, or field work.
- Engage students in community work, such as service learning. Learn more about service learning.
- Bring in ethics (e.g. Hurricane Sandy preparedness and subsequent lawsuits): this makes connections between disciplines and is centered around current events. Ethics also broaches topics related to responsibilities: What are your responsibilities as a citizen, property owner, or professional?
- Develop empathy for others' life experience and point of view. Some strategies for building this perspective include sensory mapping, real or cyber ethnography, service or community based learning, literature and media assignments, role-playing and games that look at contrasting narrative, arc of story, point of view, and evolution through time. Reflection is an important tool and can provide a gradeable product in the form of a journal, paper, or exam/assignment question.
- Remember to maintain hope and agency in the face of long-lasting complex challenges related to sustainability. Studying success stories, people who have made a difference, and actions that give hope can be effective. There is a tension between maintaining hope, and understanding the full extent of how complex and deeply entrenched the problems are.
- While we all desire our students to become actors in making our civilization more environmentally just, there are a variety of strategies for approaching this in different instructional settings. They range from developing empathy and awareness to requiring students to engage in service or advocacy. In all cases, faculty should be careful not to dictate the students' perspective or approach. The frame is to learn how to act, not to be told to act in a certain way.
- There is a strong tension between educating and engaging students in Environmental Justice and respecting the affected communities. This requires attention, preparation and skill. Anthropologists and sociologists have experience with these issues that can be brought to bear. Listening, sensitivity to context, and reflexivity are essential. While we have expertise to offer, we must refrain from claiming the community's agency for ourselves.
- Having to make and negotiate decisions in a group takes patience, time, and skill, and is something that environmental justice communities have to do under exceptionally "high-stakes" problems.
As discussed above, there are many ways to incorporate examples into the classroom. Exploring case studies, using the local environment and data, and service learning are three popular strategies. Ideas for using case studies are presented below. For a more in-depth look at using the local environment and service learning, including examples for implementing each, see these pages on Using Local Examples and Data and Service Learning.
- Remember to consider your audience: local hazards might be more effective to consider and timeliness may be an issue (e.g. Loma Prieta, Mt. St. Helens may bring blank stares).
- Bring in professional reports. Where possible, incorporate more of the history of the project. Also, there are public domain reports that could be incorporated into instruction and activities.
- If teaching about mineral resources, look for case studies for mineral resources of geologic interest that have already been exploited. These are more likely to have data, geologic maps, etc. in the public domain and thus are more widely accessible. (E.g. Yerington batholith, Nevada).
- Utilize models such as sea level rise and other natural hazard risks common to an area (e.g. earthquakes, landslides, flooding) and have students assess risk and prepare management plans to address the risks. If assessing a local hazard, you could set up a community debriefing as a service learning opportunity.
- Tie it to life choices students will make in the future: have students pick a city where they would want to move and assess the risk of living there and the level of preparedness for the risks that exist.
- Explore a case study in depth, such as:
- Sea Level Rise in Fort Lauderdale, Florida case study,
- Southern California Earthquakes case study,
- Red tide and harmful algal blooms site guide, or
- Choose from a variety of real examples you can use in the classroom. Most of these examples lend themselves to discussion starters or role-play activities.
Opportunities to strengthen the use of real world examples
Utilize the opportunity to apply classroom knowledge to real, tangible problems. Below are some ideas to get you started, or see browseable collections of examples:
- Engage students in thinking about the natural hazards in their local environment, such as is done in this activity: Evaluating natural hazards data to assess the risk to your California home by Corrie Neighbors, UC Riverside. Take this a step further and have students think about preparing for natural hazard situations before crises occur, such as is done in this activity: Developing a Multi-Hazard Mitigation Strategy by Rebekah Green, Western Washington University.
- Start a discussion or role play using a specific real world example, such as Sea Level Rise in Fort Lauderdale, Florida by Alana Edwards, Mary Beth Hartman, and Leonard Berry, Florida Atlantic University, Southern California Earthquakes, prepared by John Taber, and other examples, submitted by participants at the 2014 Teaching about Risk and Resilience workshop.
- Utilize visualization tools, data, and software to engage students in learning more about real examples. For example:
- Usefulness of Google Earth/Wikimapia as risk predictor and damage/ resilience assessment tools by Charlene Sharpe, Rutgers University.
- Using Google Earth to Measure Seacliff Erosion Rates by Alfred Hochstaedter, Monterey Peninsula College.
- Hurricane Tracks and Energy, by Lisa Gilbert (Williams College), Josh Galster (Montclair State University), Joan Ramage (Lehigh University), is part of a larger InTeGrate module Natural Hazards and Risks: Hurricanes.
- Engage students in topics related to environmental justice issues using activities such as:
- Mapping Environmental Justice: The Geography of Population and Pollution by Christopher Cusack.
- Using Media to Document Public Attitudes on Waste by Sya Buryn Kedzior.
- Hazardous Waste and Toxics: Real Data for Real Places by Richard Kujawa.
- Take a closer look at what it means to live sustainably in a world with finite resources. For example:
- Assessing Water Resource Demand in New York City, by Kyle M. Monahan, adapted from an original activity by Richard F. Bopp.
- Delve into the complexities of sustainability and the impacts of climate change, e.g.: Impact of climate change on endangered fish population in Pyramid Lake.
- Have students reflect on environmental stewardship and ethical considerations related to science and sustainability. For instance:
- Encountering Geoscience Issues in the Popular Press by Marian Buzon, University of Idaho
- Presenting Science to the Public: The ethics of communicating potential environmental impacts of industrial projects by Joy Branlund, Southwestern Illinois College.
- Case of GMOs in Environmental Cleanup by Daniel Vallero, Duke University.
Materials and Resources for Teaching with Real World Examples
See how other faculty are using real-world examples with these examples from a range of disciplines and learning environments.
- Real world example collection from Teaching about Risk and Resilience 2014 workshop participants.
- Environmental justice activities from Teaching about Environmental Justice 2013 workshop participants.
- Hazard event pages, from On the Cutting Edge, compile visualizations, activities, and resources to explore particular natural hazard events.
- Environmental justice Case studies from the University of Michigan.
- Environmental justice Native Case Studies from Evergreen.
- Teaching with Investigative Cases
- Using Socioscientific Issues
- Teaching with Current Research and Data site guide connects you to a variety of resources that incorporate data into the classroom. |
In orbital mechanics, we track the motion of particles through a Euclidean space. This means we need a frame of reference, also known as a reference frame, in which the motion is tracked. The frame of reference consists of a clock to count time and a non-rotating Cartesian coordinate system to track the \(x\), \(y\), and \(z\) position of the particle. We are going to assume that relativity is not important in this course, so a single universal clock is sufficient to specify the time for all Cartesian coordinate systems.
Types of Reference Frames
The two types of reference frames are inertial frames and non-inertial frames:
Inertial Reference Frame
An inertial reference frame is one that is not accelerating. It may be moving at constant velocity, but there can be absolutely no acceleration, including rotation!
Therefore, in an inertial reference frame, an object obeys Newton’s First Law of Motion and its velocity remains constant unless an external force acts on it. Inertial reference frames are always our first choice if possible, because the laws of mechanics tend to take their simplest form in this frame.
In orbital mechanics, we usually define an inertial reference frame with respect to the fixed stars. Of course, the stars are not really fixed—our Sun orbits the center of the galaxy, as do other stars in the Milky Way, and other galaxies may be approaching or receding at some velocity.
However, on the scale of most orbital mechanics problems we have to deal with (on the order of a few days to a few years), assuming the stars are fixed is reasonable.
Non-Inertial Reference Frame
By contrast to the inertial reference frame, the non-inertial reference frame does accelerate. This gives rise to the so-called fictitious forces, such as the centrifugal force, Coriolis force, and others.
One common example of an accelerating reference frame that you may have seen in Physics is the idea of a ball attached to a string rotating around a point—for instance, spinning a ball on a string above your head. In this case, a reference frame attached to a point which is not spinning would be considered inertial. On the other hand, a reference frame attached to the ball would be rotating and thus accelerating. It would be a non-inertial reference frame (specifically, a rotating reference frame) and would need to include a “fictitious” centrifugal force to satisfy the equations of motion.
As another example, consider a ball moving on a frictionless, rotating plate. An observer in an inertial frame would see the ball move in a straight line, since the ball does not experience any forces. However, an observer rotating with the frame would see the ball follow a curved path, implying the existence of a force. Since the force is not present in the inertial frame, this is termed a fictitious force.
Earth Reference Frames
All reference frames are either inertial or non-inertial, and deciding which type of frame we want to work with is our first choice. The second choice we need to make is where the origin of the frame should be placed.
With respect to the Earth, we will define three separate reference frames: the Earth-centered inertial (ECI) frame, the Earth-centered, Earth-fixed (ECEF) frame, and a topocentric-horizon frame attached to a moving particle.
For now, we will assume that the Earth is a sphere. We use the Earth here since most human spaceflight takes place near the Earth.
The Earth-Centered Inertial (ECI) frame is an inertial frame with the origin fixed to the center of the Earth, \(C\). This frame uses capital letters for the axes and unit vectors, as shown in Fig. 3.
In the ECI, the \(Z\) axis points towards the North pole and the \(X\)-\(Y\) plane is in the same plane as the equator. Since this is an inertial frame, it is fixed in place with respect to the celestial sphere, the stars surrounding the earth.
In the ECI, the \(X\) axis points towards the March equinox. The equinoxes are the points in space where the earth’s equatorial plane and its ecliptic plane intersect. The March equinox occurs when the sun crosses the equatorial plane from below. This currently happens in the constellation Pisces, although in antiquity this occurred in the constellation Aries (the ram). Thus, the March equinox is also called the First point of Aries.
The Earth-centered, Earth-fixed (ECEF) frame is a non-inertial frame, but with the origin still fixed at the center of the Earth. The main difference from the ECI is that the axes in the ECEF rotate at the same rate as the surface of the earth.
The ECEF uses lower case, primed letters for the axes and unit vectors. The \(z'\) axis points towards the North pole, and the \(x'\) axis intersects the equator and the prime meridian. Since the ECEF rotates with the earth, the \(x'\) axis always points through the equator and the prime meridian.
Approximately every 24 hours (more precisely, once per sidereal day), the ECEF and ECI axes are aligned. Thus, the angular distance between \(X\) and \(x'\) is \(\theta_G\), and \(\theta_G\) increases at the rate \(\Omega\), the rotation rate of the Earth.
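Since the two frames share the \(Z\)/\(z'\) axis, converting a vector from ECI to ECEF is a single rotation through \(\theta_G\). The NumPy sketch below illustrates that rotation; the function name and the assumption that \(\theta_G\) is supplied in radians are illustrative, not part of the text above.

```python
import numpy as np

def eci_to_ecef(r_eci, theta_g):
    """Rotate an ECI position vector into the ECEF frame.

    theta_g is the angle from the ECI X axis to the ECEF x' axis,
    measured about the shared Z/z' axis, in radians.
    """
    c, s = np.cos(theta_g), np.sin(theta_g)
    # Frame rotation about Z by +theta_g: ECI components -> ECEF components
    rot_z = np.array([[ c,   s,  0.0],
                      [-s,   c,  0.0],
                      [0.0, 0.0, 1.0]])
    return rot_z @ np.asarray(r_eci, dtype=float)

# Example: a point on the ECI X axis, seen one quarter turn later in the ECEF frame
print(eci_to_ecef([7000e3, 0.0, 0.0], np.pi / 2))  # approximately [0, -7000e3, 0]
```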
The final coordinate system determines the position of a particle \(P\) moving arbitrarily above the surface of the Earth. This could be a person, car, airplane, or spacecraft.
The origin, \(O\), of this coordinate system is fixed to the particle. At a given instant, the position of this coordinate system can be determined relative to the ECEF frame by specifying the longitude angle (\(\Lambda\)) and the latitude angle (\(\phi\)), which are positive in the East and North directions, respectively. These two directions define the \(x\) and \(y\) axes of a topocentric-horizon coordinate system.
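For a spherical Earth, the longitude and latitude also fix where the particle sits in the ECEF frame. The sketch below is a minimal illustration of that relationship; the radius value, the example location, and the function name are assumptions made for the example, not values given in the text.

```python
import numpy as np

R_EARTH = 6378.0e3  # assumed spherical-Earth radius, metres

def ecef_position(lon, lat, alt=0.0):
    """ECEF position of a point at longitude lon and latitude lat (radians),
    at height alt (metres) above a spherical Earth."""
    r = R_EARTH + alt
    return np.array([r * np.cos(lat) * np.cos(lon),
                     r * np.cos(lat) * np.sin(lon),
                     r * np.sin(lat)])

p = ecef_position(lon=np.radians(-80.0), lat=np.radians(28.5), alt=0.0)
up = p / np.linalg.norm(p)   # local "up" direction at that point
print(p, up)
```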
The third direction, \(z\), is directly up from the surface of the Earth and is called the zenith. Note that the direction of “up” changes as you move over the surface of the sphere. |
Photons (from Greek φως, meaning light), in many atomic models in physics, are particles which transmit light. In other words, light is carried over space by photons. Photon is an elementary particle that is its own antiparticle. In quantum mechanics each photon has a characteristic quantum of energy that depends on frequency: A photon associated with light at a higher frequency will have more energy (and be associated with light at a shorter wavelength).
Photons have a rest mass of 0 (zero). However, Einstein's theory of relativity says that they do have a certain amount of momentum. Before the photon got its name, Einstein revived the proposal that light is separate pieces of energy (particles). These particles came to be known as photons.
Photons are fundamental particles. Although they can be created and destroyed, their lifetime is infinite.
A photon has a given frequency, which determines its color. Radio technology makes great use of frequency. Outside the visible range, frequency is referred to less often; for example, it is rarely used to distinguish X-ray photons from infrared photons. Frequency is equivalent to the quantum energy of the photon, as related by the Planck equation,
E = hf,
where E is the photon's energy, h is the Planck constant, and f is the frequency of the light associated with the photon. This frequency, f, is typically measured in cycles per second or, equivalently, in Hz. The quantum energy of photons is what matters in cameras and other machines that use visible and higher-than-visible radiation, because such photons are energetic enough to ionize atoms.
The frequency and wavelength of light are related by c = fλ, where λ (lambda) is the wavelength, or length of the wave (typically measured in meters), and c is the speed of light.
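As a quick numerical illustration of these relations, the short sketch below computes the frequency and quantum energy of a photon from its wavelength. The constants are standard rounded values, and the 550 nm example is simply an arbitrary choice of visible (green) light; the function names are illustrative.

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_frequency(wavelength_m):
    """Frequency in hertz for light of the given wavelength (f = c / lambda)."""
    return C / wavelength_m

def photon_energy(wavelength_m):
    """Photon energy in joules for the given wavelength (E = h*f = h*c / lambda)."""
    return H * photon_frequency(wavelength_m)

print(photon_frequency(550e-9))  # ~5.5e14 Hz
print(photon_energy(550e-9))     # ~3.6e-19 J
```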
Another important property of a photon is its polarization. If you could watch a giant photon coming straight at you, its field would appear to whip vertically, horizontally, or somewhere in between. Polarized sunglasses block photons that oscillate horizontally; this is how they reduce glare, because light bouncing off horizontal surfaces such as water or roads tends to be polarized that way. Liquid crystal displays also use polarization to control which light passes through. Some animals can see light polarization.
Photon interactions with matter
Light is often created or absorbed when an electron gains or loses energy. This energy can be in the form of heat, kinetic energy, or some other form. For example, an incandescent light bulb uses heat. The added energy can push an electron up one energy level into a higher shell. This makes it unstable, and like everything, it wants to be in the lowest energy state. (If being in the lowest energy state is confusing, pick up a pencil and drop it. Once on the ground, the pencil is in a lower energy state.) When the electron drops back down to a lower energy state, it must release the energy it absorbed, and it must obey the conservation of energy (energy can neither be created nor destroyed). Electrons release this energy as photons, and at higher intensities this light can be seen as visible light.
Photons and the electromagnetic force
In particle physics, photons are responsible for the electromagnetic force. Electromagnetism is an idea that combines electricity with magnetism. One common way that we experience electromagnetism in our daily lives is light, which is caused by electromagnetism. Electromagnetism is also responsible for the interaction of electric charges, which is the reason you cannot push your hand through a table. Since photons are the force-carrying particles of electromagnetism, they are also gauge bosons. Some matter, called dark matter, is not believed to be affected by electromagnetism. This would mean that dark matter does not have a charge and does not give off light.
The Michelson interferometer is a common configuration for optical interferometry and was invented by Albert Abraham Michelson. Using a beamsplitter, a light source is split into two arms. Each of those is reflected back toward the beamsplitter which then combines their amplitudes interferometrically. The resulting interference pattern that is not directed back toward the source is typically directed to some type of photoelectric detector or camera. Depending on the interferometer's particular application, the two paths may be of different lengths or include optical materials or components under test.
The Michelson interferometer (among other interferometer configurations) is employed in many scientific experiments and became well known for its use by Albert Michelson and Edward Morley in the famous Michelson-Morley experiment (1887) in a configuration which would have detected the earth's motion through the supposed luminiferous aether that most physicists at the time believed was the medium in which light waves propagated. The null result of that experiment essentially disproved the existence of such an aether, leading eventually to the special theory of relativity and the revolution in physics at the beginning of the twentieth century. In 2015, another application of the Michelson interferometer, LIGO, made the first direct detection of gravitational waves (announced in 2016). That observation confirmed an important prediction of general relativity, validating the theory's prediction of space-time distortion in the context of large scale cosmic events (known as strong field tests).
- 1 Configuration
- 2 Source bandwidth
- 3 Applications
- 4 See also
- 5 Notes
- 6 References
- 7 External links
A Michelson interferometer consists minimally of mirrors M1 & M2 and a beam splitter M. In Fig 2, a source S emits light that hits the beam splitter (in this case, a plate beamsplitter) surface M at point C. M is partially reflective, so part of the light is transmitted through to point B while some is reflected in the direction of A. Both beams recombine at point C' to produce an interference pattern incident on the detector at point E (or on the retina of a person's eye). If there is a slight angle between the two returning beams, for instance, then an imaging detector will record a sinusoidal fringe pattern as shown in Fig. 3b. If there is perfect spatial alignment between the returning beams, then there will not be any such pattern but rather a constant intensity over the beam dependent on the differential pathlength; this is difficult, requiring very precise control of the beam paths.
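For an ideal lossless 50/50 beamsplitter and a monochromatic source, the detected intensity varies sinusoidally with the differential path length, which is what produces the fringes just described. The sketch below is a minimal model under those assumptions; the wavelength and function name are illustrative choices, not values from the text.

```python
import numpy as np

WAVELENGTH = 632.8e-9  # assumed He-Ne laser wavelength, metres

def detector_intensity(mirror_shift, i0=1.0, wavelength=WAVELENGTH):
    """Detected intensity as one mirror is translated by mirror_shift (metres).

    Moving a mirror by d changes the round-trip path difference by 2*d,
    so one full fringe appears for every half wavelength of mirror travel.
    """
    path_difference = 2.0 * mirror_shift
    return 0.5 * i0 * (1.0 + np.cos(2.0 * np.pi * path_difference / wavelength))

shifts = np.linspace(0.0, WAVELENGTH, 5)   # one wavelength of travel spans two fringes
print(detector_intensity(shifts))          # [1., 0., 1., 0., 1.]
```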
Fig. 2 shows use of a coherent (laser) source. Narrowband spectral light from a discharge or even white light can also be used, however to obtain significant interference contrast it is required that the differential pathlength is reduced below the coherence length of the light source. That can be only micrometers for white light, as discussed below.
If a lossless beamsplitter is employed, then one can show that optical energy is conserved. At every point on the interference pattern, the power that is not directed to the detector at E is rather present in a beam (not shown) returning in the direction of the source.
As shown in Fig. 3a and 3b, the observer has a direct view of mirror M1 seen through the beam splitter, and sees a reflected image M'2 of mirror M2. The fringes can be interpreted as the result of interference between light coming from the two virtual images S'1 and S'2 of the original source S. The characteristics of the interference pattern depend on the nature of the light source and the precise orientation of the mirrors and beam splitter. In Fig. 3a, the optical elements are oriented so that S'1 and S'2 are in line with the observer, and the resulting interference pattern consists of circles centered on the normal to M1 and M'2 (fringes of equal inclination). If, as in Fig. 3b, M1 and M'2 are tilted with respect to each other, the interference fringes will generally take the shape of conic sections (hyperbolas), but if M1 and M'2 overlap, the fringes near the axis will be straight, parallel, and equally spaced (fringes of equal thickness). If S is an extended source rather than a point source as illustrated, the fringes of Fig. 3a must be observed with a telescope set at infinity, while the fringes of Fig. 3b will be localized on the mirrors.
White light has a tiny coherence length and is difficult to use in a Michelson (or Mach-Zehnder) interferometer. Even a narrowband (or "quasi-monochromatic") spectral source requires careful attention to issues of chromatic dispersion when used to illuminate an interferometer. The two optical paths must be practically equal for all wavelengths present in the source. This requirement can be met if both light paths cross an equal thickness of glass of the same dispersion. In Fig. 4a, the horizontal beam crosses the beam splitter three times, while the vertical beam crosses the beam splitter once. To equalize the dispersion, a so-called compensating plate identical to the substrate of the beam splitter may be inserted into the path of the vertical beam. In Fig. 4b, using a cube beam splitter already equalizes the pathlengths in glass. The requirement for dispersion equalization is eliminated by using extremely narrowband light from a laser.
The extent of the fringes depends on the coherence length of the source. In Fig. 3b, the yellow sodium light used for the fringe illustration consists of a pair of closely spaced lines, D1 and D2, implying that the interference pattern will blur after several hundred fringes. Single longitudinal mode lasers are highly coherent and can produce high contrast interference with differential pathlengths of millions or even billions of wavelengths. On the other hand, using white (broadband) light, the central fringe is sharp, but away from the central fringe the fringes are colored and rapidly become indistinct to the eye.
Early experimentalists attempting to detect the earth's velocity relative to the supposed luminiferous aether, such as Michelson and Morley (1887) and Miller (1933), used quasi-monochromatic light only for initial alignment and coarse path equalization of the interferometer. Thereafter they switched to white (broadband) light, since using white light interferometry they could measure the point of absolute phase equalization (rather than phase modulo 2π), thus setting the two arms' pathlengths equal.[note 1][note 2] More importantly, in a white light interferometer, any subsequent "fringe jump" (differential pathlength shift of one wavelength) would always be detected.
The Michelson interferometer configuration is used in a number of different applications.
Fourier transform spectrometer
Fig. 5 illustrates the operation of a Fourier transform spectrometer, which is essentially a Michelson interferometer with one mirror movable. (A practical Fourier transform spectrometer would substitute corner cube reflectors for the flat mirrors of the conventional Michelson interferometer, but for simplicity, the illustration does not show this.) An interferogram is generated by making measurements of the signal at many discrete positions of the moving mirror. A Fourier transform converts the interferogram into an actual spectrum. Fourier transform spectrometers offer significant advantages over dispersive (i.e. grating and prism) spectrometers. (1) The Michelson interferometer's detector in effect monitors all wavelengths simultaneously throughout the entire measurement. When using a noisy detector, such as at infrared wavelengths, this offers an increase in signal to noise ratio while using only a single detector element; (2) the interferometer does not require a limited aperture as do grating or prism spectrometers, which require the incoming light to pass through a narrow slit in order to achieve high spectral resolution. This is an advantage when the incoming light is not of a single spatial mode. For more information, see Fellgett's advantage.
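The relationship between the interferogram and the spectrum can be illustrated numerically: the spectrum is recovered (up to scaling) by Fourier transforming the recorded intensity versus optical path difference. The sketch below is a toy model with made-up sampling parameters and a single assumed emission line; it does not describe any particular instrument.

```python
import numpy as np

n = 4096
mirror_step = 50e-9                      # assumed mirror travel per sample, metres
opd = 2 * np.arange(n) * mirror_step     # optical path difference is twice mirror travel

sigma0 = 1.0 / 589e-9                    # wavenumber of an assumed single line, 1/m
interferogram = 0.5 * (1.0 + np.cos(2.0 * np.pi * sigma0 * opd))

# Fourier transforming the (mean-subtracted) interferogram recovers the spectrum
spectrum = np.abs(np.fft.rfft(interferogram - interferogram.mean()))
sigma_axis = np.fft.rfftfreq(n, d=2 * mirror_step)   # wavenumber axis, 1/m

print(sigma_axis[np.argmax(spectrum)])   # close to sigma0, about 1.70e6 1/m
```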
The Twyman-Green interferometer is a variation of the Michelson interferometer used to test small optical components, invented and patented by Twyman and Green in 1916. The basic characteristics distinguishing it from the Michelson configuration are the use of a monochromatic point light source and a collimator. It is interesting to note that Michelson (1918) criticized the Twyman-Green configuration as being unsuitable for the testing of large optical components, since the available light sources had limited coherence length. Michelson pointed out that constraints on geometry forced by the limited coherence length required the use of a reference mirror of equal size to the test mirror, making the Twyman-Green impractical for many purposes. Decades later, the advent of laser light sources answered Michelson's objections.
The use of a figured reference mirror in one arm allows the Twyman-Green interferometer to be used for testing various forms of optical component, such as lenses or telescope mirrors. Fig. 6 illustrates a Twyman-Green interferometer set up to test a lens. A point source of monochromatic light is expanded by a diverging lens (not shown), then is collimated into a parallel beam. A convex spherical mirror is positioned so that its center of curvature coincides with the focus of the lens being tested. The emergent beam is recorded by an imaging system for analysis.
Laser unequal path interferometer
The "LUPI" is a Twyman-Green interferometer that uses a coherent laser light source. The high coherence length of a laser allows unequal path lengths in the test and reference arms and permits economical use of the Twyman-Green configuration in testing large optical components.
This is a Michelson interferometer in which the mirror in one arm is replaced with a Gires–Tournois etalon. The highly dispersed wave reflected by the Gires–Tournois etalon interferes with the original wave as reflected by the other mirror. Because the phase change from the Gires–Tournois etalon is an almost step-like function of wavelength, the resulting interferometer has special characteristics. It has an application in fiber-optic communications as an optical interleaver.
Both mirrors in a Michelson interferometer can be replaced with Gires–Tournois etalons. The step-like relation of phase to wavelength is thereby more pronounced, and this can be used to construct an asymmetric optical interleaver.
Gravitational wave detection
Michelson interferometry is one leading method for the direct detection of gravitational waves. This involves detecting tiny strains in space itself, affecting two long arms of the interferometer unequally, due to a strong passing gravitational wave. In 2015 the first detection of gravitational waves was accomplished using the LIGO instrument, a Michelson interferometer with 4 km arms. This was the first experimental validation of gravitational waves, predicted by Albert Einstein's General Theory of Relativity. An even larger Michelson interferometer in space, to achieve greater sensitivity, is in the planning stages.
Fig. 7 illustrates use of a Michelson interferometer as a tunable narrow band filter to create dopplergrams of the Sun's surface. When used as a tunable narrow band filter, Michelson interferometers exhibit a number of advantages and disadvantages when compared with competing technologies such as Fabry–Pérot interferometers or Lyot filters. Michelson interferometers have the largest field of view for a specified wavelength, and are relatively simple in operation, since tuning is via mechanical rotation of waveplates rather than via high voltage control of piezoelectric crystals or lithium niobate optical modulators as used in a Fabry–Pérot system. Compared with Lyot filters, which use birefringent elements, Michelson interferometers have a relatively low temperature sensitivity. On the negative side, Michelson interferometers have a relatively restricted wavelength range, and require use of prefilters which restrict transmittance. The reliability of Michelson interferometers has tended to favor their use in space applications, while the broad wavelength range and overall simplicity of Fabry–Pérot interferometers has favored their use in ground-based systems.
Another application of the Michelson Interferometer is in optical coherence tomography (OCT), a medical imaging technique using low-coherence interferometry to provide tomographic visualization of internal tissue microstructures. As seen in Fig. 8, the core of a typical OCT system is a Michelson interferometer. One interferometer arm is focused onto the tissue sample and scans the sample in an X-Y longitudinal raster pattern. The other interferometer arm is bounced off a reference mirror. Reflected light from the tissue sample is combined with reflected light from the reference. Because of the low coherence of the light source, interferometric signal is observed only over a limited depth of sample. X-Y scanning therefore records one thin optical slice of the sample at a time. By performing multiple scans, moving the reference mirror between each scan, an entire three-dimensional image of the tissue can be reconstructed. Recent advances have striven to combine the nanometer phase retrieval of coherent interferometry with the ranging capability of low-coherence interferometry.
Atmospheric and Space Applications
The Michelson Interferometer has played an important role in studies of the upper atmosphere, revealing temperatures and winds, employing both space-borne, and ground-based instruments, by measuring the Doppler widths and shifts in the spectra of airglow and aurora. For example, the Wind Imaging Interferometer, WINDII, on the Upper Atmosphere Research Satellite, UARS, (launched on September 12, 1991) measured the global wind and temperature patterns from 80 to 300 km by using the visible airglow emission from these altitudes as a target and employing optical Doppler interferometry to measure the small wavelength shifts of the narrow atomic and molecular airglow emission lines induced by the bulk velocity of the atmosphere carrying the emitting species. The instrument was an all-glass field-widened achromatically and thermally compensated phase-stepping Michelson interferometer, along with a bare CCD detector that imaged the airglow limb through the interferometer. A sequence of phase-stepped images was processed to derive the wind velocity for two orthogonal view directions, yielding the horizontal wind vector.
The principle of using a polarizing Michelson Interferometer as a narrow band filter was first described by Evans who developed a birefringent photometer where the incoming light is split into two orthogonally polarized components by a polarizing beam splitter, sandwiched between two halves of a Michelson cube. This led to the first polarizing wide-field Michelson interferometer described by Title and Ramsey which was used for solar observations; and led to the development of a refined instrument applied to measurements of oscillations in the sun's atmosphere, employing a network of observatories around the Earth known as the Global Oscillations Network Group (GONG).
The Polarizing Atmospheric Michelson Interferometer, PAMI, developed by Bird et al., and discussed in Spectral Imaging of the Atmosphere, combines the polarization tuning technique of Title and Ramsey with the Shepherd et al. technique of deriving winds and temperatures from emission rate measurements at sequential path differences, but the scanning system used by PAMI is much simpler than the moving mirror systems in that it has no internal moving parts, instead scanning with a polarizer external to the interferometer. The PAMI was demonstrated in an observation campaign where its performance was compared to a Fabry-Perot spectrometer, and employed to measure E-region winds.
More recently, the Helioseismic and Magnetic Imager (HMI), on the Solar Dynamics Observatory, employs two Michelson Interferometers with a polarizer and other tunable elements, to study solar variability and to characterize the Sun's interior along with the various components of magnetic activity. HMI takes high-resolution measurements of the longitudinal and vector magnetic field over the entire visible disk thus extending the capabilities of its predecessor, the SOHO's MDI instrument (See Fig. 9). HMI produces data to determine the interior sources and mechanisms of solar variability and how the physical processes inside the Sun are related to surface magnetic field and activity. It also produces data to enable estimates of the coronal magnetic field for studies of variability in the extended solar atmosphere. HMI observations will help establish the relationships between the internal dynamics and magnetic activity in order to understand solar variability and its effects.
In one example of the use of the MDI, Stanford scientists reported the detection of several sunspot regions in the deep interior of the Sun, 1–2 days before they appeared on the solar disc. The detection of sunspots in the solar interior may thus provide valuable warnings about upcoming surface magnetic activity which could be used to improve and extend the predictions of space weather forecasts.
- List of types of interferometers
- LIGO Laser Interferometer Gravitational-Wave Observatory
- Michelson (1881) wrote, "... when they [the fringes using sodium light] were of convenient width and of maximum sharpness, the sodium flame was removed and the lamp again substituted. The screw m was then slowly turned till the bands reappeared. They were then of course colored, except the central band, which was nearly black."
- Shankland (1964) wrote concerning the 1881 experiment, p. 20: "The interference fringes were found by first using a sodium light source and after adjustment for maximum visibility, the source was changed to white light and the colored fringes then located. White-light fringes were employed to facilitate observation of shifts in position of the interference pattern." And concerning the 1887 experiment, p. 31: "With this new interferometer, the magnitude of the expected shift of the white-light interference pattern was 0.4 of a fringe as the instrument was rotated through an angle of 90° in the horizontal plane. (The corresponding shift in the Potsdam interferometer had been 0.04 fringe.)"
- Albert Michelson; Edward Morley (1887). "On the Relative Motion of the Earth and the Luminiferous Ether". American Journal of Science. 34 (203): 333–345. doi:10.2475/ajs.s3-34.203.333.
- Abbott, B. P.; et al. (LIGO Scientific Collaboration and Virgo Collaboration) (15 June 2016). "GW151226: Observation of Gravitational Waves from a 22-Solar-Mass Binary Black Hole Coalescence". Physical Review Letters. 116 (24): 241103. doi:10.1103/PhysRevLett.116.241103.
- Hariharan, P. (2007). Basics of Interferometry, Second Edition. Elsevier. ISBN 0-12-373589-0.
- Dayton C. Miller, "The Ether-Drift Experiment and the Determination of the Absolute Motion of the Earth," Rev. Mod. Phys., V5, N3, pp. 203-242 (Jul 1933).
- Michelson, A.A. (1881). "The Relative Motion of the Earth and the Luminiferous Ether". American Journal of Science. 22: 120–129. doi:10.2475/ajs.s3-22.128.120.
- Shankland, R.S. (1964). "Michelson–Morley experiment". American Journal of Physics. 32 (1): 16–35. Bibcode:1964AmJPh..32...16S. doi:10.1119/1.1970063.
- "Spectrometry by Fourier transform". OPI - Optique pour l'Ingénieur. Retrieved 3 April 2012.
- "Michelson Interferometer Operation". Block Engineering. Retrieved 26 April 2012.
- Michelson, A. A. (1918). "On the Correction of Optical Surfaces". Proceedings of the National Academy of Sciences of the United States of America. 4 (7): 210–212. Bibcode:1918PNAS....4..210M. doi:10.1073/pnas.4.7.210. PMID 16576300.
- Malacara, D. (2007). "Twyman–Green Interferometer". Optical Shop Testing. p. 46. doi:10.1002/9780470135976.ch2. ISBN 9780470135976.
- "Interferential Devices - Twyman-Green Interferometer". OPI - Optique pour l'Ingénieur. Retrieved 4 April 2012.
- F. Gires & P. Tournois (1964). "Interféromètre utilisable pour la compression d'impulsions lumineuses modulées en fréquence". Comptes Rendus de l'Académie des Sciences de Paris. 258: 6112–6115.
- Nature, "Dawn of a new astronomy", M. Coleman Miller, Vol 531, issue 7592, page 40, 3 March 2016
- The New York Times, "With Faint Chirp, Scientists Prove Einstein Correct", Dennis Overbye, February 12, 2016, page A1, New York
- Gary, G.A.; Balasubramaniam, K.S. "Additional Notes Concerning the Selection of a Multiple-Etalon System for ATST" (PDF). Advanced Technology Solar Telescope. Retrieved 29 April 2012.
- Huang, D.; Swanson, E.A.; Lin, C.P.; Schuman, J.S.; et al. (1991). "Optical Coherence Tomography" (PDF). Science. 254 (5035): 1178–81. Bibcode:1991Sci...254.1178H. doi:10.1126/science.1957169. PMID 1957169. Retrieved 10 April 2012.
- Fercher, A.F. (1996). "Optical Coherence Tomography" (PDF). Journal of Biomedical Optics. 1 (2): 157–173. Bibcode:1996JBO.....1..157F. doi:10.1117/12.231361. Retrieved 10 April 2012.
- Olszak, A.G.; Schmit, J.; Heaton, M.G. "Interferometry: Technology and Applications" (PDF). Bruker. Retrieved 1 April 2012.
- Shepherd, G. G.; et al. (1993). "WINDII, the Wind Imaging Interferometer on the Upper Atmosphere Research Satellite". J. Geophys. Res. 98(D6): 10,725–10,750.
- Evans, J. W. (1947). "The birefringent filter". J. Opt. Soc. Am. 39 229.
- Title, A. M.; Ramsey, H. E. (1980). "Improvements in birefringent filters. 6: Analog birefringent elements". Appl. Opt. 19, p. 2046.
- Harvey, J.; et al. (1996). "The Global Oscillation Network Group (GONG) Project". Science. 272 (5266): 1284–1286. Bibcode:1996Sci...272.1284H. doi:10.1126/science.272.5266.1284.
- Bird, J.; et al. (1995). "A polarizing Michelson interferometer for measuring thermospheric winds". Meas. Sci. Technol. 6 (9): 1368–1378. Bibcode:1995MeScT...6.1368B. doi:10.1088/0957-0233/6/9/019.
- Shepherd, G. G. (2002). Spectral Imaging of the Atmosphere. Academic Press. ISBN 0-12-639481-4.
- Shepherd, G. G.; et al. (1985). "WAMDII: wide angle Michelson Doppler imaging interferometer for Spacelab". Appl. Opt. 24, p. 1571.
- Bird, J.; G. G. Shepherd; C. A. Tepley (1995). "Comparison of lower thermospheric winds measured by a Polarizing Michelson Interferometer and a Fabry-Perot spectrometer during the AIDA campaign". Journal of Atmospheric and Terrestrial Physics. 55 (3): 313–324. Bibcode:1993JATP...55..313B. doi:10.1016/0021-9169(93)90071-6.
- Dean Pesnell; Kevin Addison (5 February 2010). "SDO - Solar Dynamics Observatory: SDO Instruments". NASA. Retrieved 2010-02-13.
- Solar Physics Research Group. "Helioseismic and Magnetic Imager Investigation". Stanford University. Retrieved 2010-02-13.
- Ilonidis, S.; Zhao, J.; Kosovichev, A. (2011). "Detection of Emerging Sunspot Regions in the Solar Interior". Science. 333 (6045): 993–996. Bibcode:2011Sci...333..993I. doi:10.1126/science.1206253. PMID 21852494.
- Diagrams of Michelson interferometers
- Application of a step-phase interferometer in optical communication
- European Gravitational Observatory
- A satellite view of the VIRGO interferometer
- A free software, to simulate and understand the Michelson interferometer principles, made by students of Faculty of Engineering of the University of Porto |
Students can go through AP Board 9th Class Maths Notes Chapter 3 The Elements of Geometry to understand and remember the concepts easily.
AP State Board Syllabus 9th Class Maths Notes Chapter 3 The Elements of Geometry
→ Geometry is structured on its building blocks namely point, line and plane.
→ In geometry there are undefined terms like point, plane and line.
→ Angles, circles and triangles are the examples for defined terms.
→ No better entrance exists than Euclid’s time honoured ‘Elements’.
→ In ‘The Elements’, Euclid developed a new system of thought which laid the foundation for the advancement of the geometry.
→ Some of the Euclid’s axioms are:
- Things which are equal to same things are equal to one another.
- If equals are added to equals, the wholes are equal.
- If equals are subtracted from equals, the remainders are also equal.
- Things which coincide with one another are equal to one another.
- The whole is greater than the part.
- Things which are double of the same things are equal to one another.
- Things which are halves of the same things are equal to one another.
→ Euclid’s postulates are
Postulate – 1 : To draw a straight line from any point to any point.
Postulate – 2 : A terminated line can be produced indefinitely.
Postulate – 3 : To describe a circle with any centre and radius.
Postulate – 4 : That all right angles are equal to one another.
Postulate – 5 : If a straight line falling on two straight lines makes the interior angles on the same side of it, taken together, less than two right angles, then the two straight lines, if produced indefinitely, meet on that side on which the sum of the angles is less than two right angles.
→ Equivalent versions of Euclid’s fifth postulate:
- Through a point not on a given line, exactly one parallel line may be drawn to the given line – John Playfair (1748–1819).
- The sum of angles of any triangle is a constant and is equal to two right angles (Legendre).
- There exists a pair of lines everywhere equidistant from one another (Posidonius).
- If a straight line intersects any one of two parallel lines, then it will intersect the other also (Proclus).
- The statements that were proved to be true are called propositions or theorems.
- The statements neither proved nor disproved are called conjectures.
- There are non-Euclidian geometries. |
In geometry, we give different names to different types of angles and angle pairs depending on their measures and positions: right angles, adjacent supplementary angles, vertical angles, and more. But how do we know which is which?
Let’s start with the basics.
If the measure of the angle is exactly 90 degrees, it’s known as a right angle. If an angle is less than 90 degrees, it’s an acute angle. An angle greater than 90 degrees is an obtuse angle. If the measure of an angle is equal to 180 degrees, it’s known as a straight angle.
There are also names given to pairs of angles.
Vertical angles, or opposite angles, are the two angles directly opposite each other when two straight lines cross (Figure 1).
Complementary angles are two angles that add to 90 degrees (Figure 2).
Supplementary angles are two angles that add up to 180 degrees (Figure 3).
Adjacent Angles vs. Nonadjacent Angles
Angles are adjacent when they share a common side and a common vertex.
Angles 1 and 2 are nonadjacent, while angles 3 and 4 are adjacent: they share a common side and vertex.
Adjacent Supplementary Angles Defined
Now that we understand the definitions of adjacent and nonadjacent angles, we can see that adjacent supplementary angles are two angles that share a side and vertex and add up to 180 degrees.
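A minimal sketch of this definition is below: it simply checks the two conditions (sharing a side and a vertex, and summing to 180 degrees). The function names, and the idea of passing the shared-side and shared-vertex facts as flags, are illustrative only.

```python
def is_supplementary(a_deg, b_deg, tol=1e-9):
    """Two angle measures are supplementary when they add up to 180 degrees."""
    return abs((a_deg + b_deg) - 180.0) < tol

def is_adjacent_supplementary(a_deg, b_deg, share_side, share_vertex):
    """Adjacent supplementary angles share a side and a vertex AND sum to 180 degrees."""
    return share_side and share_vertex and is_supplementary(a_deg, b_deg)

print(is_adjacent_supplementary(110, 70, share_side=True, share_vertex=True))    # True
print(is_adjacent_supplementary(110, 70, share_side=False, share_vertex=False))  # False: supplementary but not adjacent
```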
Using this definition, look at the diagram below to see which angles are adjacent supplementary.
Angles ABC and ABD are adjacent because they share line segment AB and vertex B.
Angles EFG and HIJ are not adjacent because they don’t share any common side.
However, the angle measures in both pairs of supplementary angles (ABC and ABD, and EFG and HIJ) still add up to 180 degrees.
To review, that means EFG and HIJ are supplementary angles. However, only ABC and ABD are adjacent supplementary angles.
Recognizing Adjacent Supplementary Angles
To recap, adjacent supplementary angles don’t just share a side and vertex but they also add up to 180 degrees. These angles commonly show up in geometry proofs, so if you’re not sure, look for a straight line intersected by another line segment with the two angles sharing a common side and vertex. |
Linear motion (also called rectilinear motion) is a one-dimensional motion along a straight line, and can therefore be described mathematically using only one spatial dimension. Linear motion can be of two types: uniform linear motion, with constant velocity (zero acceleration), and non-uniform linear motion, with variable velocity (non-zero acceleration). The motion of a particle (a point-like object) along a line can be described by its position x, which varies with t (time). An example of linear motion is an athlete running 100 m along a straight track.
Linear motion is the most basic of all motion. According to Newton's first law of motion, objects that do not experience any net force will continue to move in a straight line with a constant velocity until they are subjected to a net force. Under everyday circumstances, external forces such as gravity and friction can cause an object to change the direction of its motion, so that its motion cannot be described as linear.
One may compare linear motion to general motion. In general motion, a particle's position and velocity are described by vectors, which have a magnitude and direction. In linear motion, the directions of all the vectors describing the system are equal and constant which means the objects move along the same axis and do not change direction. The analysis of such systems may therefore be simplified by neglecting the direction components of the vectors involved and dealing only with the magnitude.
Neglecting the rotation and other motions of the Earth, an example of linear motion is the ball thrown straight up and falling back straight down.
The motion in which all the particles of a body move through the same distance in the same time is called translatory motion. There are two types of translatory motion: rectilinear motion and curvilinear motion. Since linear motion is a motion in a single dimension, the distance traveled by an object in a particular direction is the same as its displacement. The SI unit of displacement is the metre. If x1 is the initial position of an object and x2 is its final position, then mathematically the displacement is given by:
Δx = x2 − x1
The equivalent of displacement in rotational motion is the angular displacement, measured in radians. The magnitude of the displacement cannot be greater than the distance travelled, because the displacement is itself a distance, namely the shortest one between the start and end points. Consider a person travelling to work daily. The overall displacement when he returns home is zero, since the person ends up back where he started, but the distance travelled is clearly not zero.
Velocity refers to a displacement in one direction with respect to an interval of time. It is defined as the rate of change of displacement over change in time. Velocity is a vector quantity, representing a direction and a magnitude of movement. The magnitude of a velocity is called speed. The SI unit of speed is m/s, that is, metre per second.
The average velocity over a time interval is
v_avg = Δx/Δt = (x2 − x1)/(t2 − t1)
where:
- t1 is the time at which the object was at position x1, and
- t2 is the time at which the object was at position x2.
The magnitude of the average velocity is called an average speed.
In contrast to an average velocity, referring to the overall motion in a finite time interval, the instantaneous velocity of an object describes the state of motion at a specific point in time. It is defined by letting the length of the time interval tend to zero; that is, the velocity is the time derivative of the displacement as a function of time, v = dx/dt.
The magnitude of the instantaneous velocity is called the instantaneous speed.
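In practice, position is often known only at sampled instants, and the derivative definitions above are approximated numerically. The sketch below applies NumPy's finite-difference gradient to made-up sample data; the particular trajectory x(t) = 2 + 3t + 0.5t² is purely illustrative.

```python
import numpy as np

# Sampled positions of a particle on a line: x(t) = 2 + 3t + 0.5t^2,
# so the exact velocity is v(t) = 3 + t and the acceleration is 1.
t = np.linspace(0.0, 10.0, 101)
x = 2.0 + 3.0 * t + 0.5 * t**2

v = np.gradient(x, t)   # finite-difference approximation of dx/dt
a = np.gradient(v, t)   # finite-difference approximation of dv/dt

print(v[50], a[50])     # approximately 8.0 and 1.0 at t = 5
```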
Acceleration is defined as the rate of change of velocity with respect to time. Acceleration is the second derivative of displacement, i.e. acceleration can be found by differentiating position with respect to time twice, or by differentiating velocity with respect to time once. The SI unit of acceleration is m/s², or metre per second squared.
If a_avg is the average acceleration and Δv is the change in velocity over the time interval Δt, then mathematically,
a_avg = Δv/Δt
The instantaneous acceleration is the limit of the ratio Δv/Δt as Δt approaches zero, i.e.,
a = dv/dt = d²x/dt²
The rate of change of acceleration, the third derivative of displacement, is known as jerk. The SI unit of jerk is m/s³. In the UK jerk is also known as jolt.
The rate of change of jerk, the fourth derivative of displacement, is known as jounce. The SI unit of jounce is m/s⁴, which can be pronounced as metres per quartic second.
Equations of kinematics
For motion with constant acceleration, the standard kinematic equations relate these quantities:
- v = u + at
- s = ut + ½at²
- v² = u² + 2as
- s = ½(u + v)t
where
- u is the initial velocity
- v is the final velocity
- a is the acceleration
- s is the displacement
- t is the time
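The short sketch below evaluates two of these constant-acceleration relations and cross-checks them against each other; the numbers are an arbitrary example, not data from the text.

```python
def constant_acceleration(u, a, t):
    """Final velocity and displacement after time t, starting at velocity u,
    under constant acceleration a (all SI units)."""
    v = u + a * t
    s = u * t + 0.5 * a * t**2
    return v, s

v, s = constant_acceleration(u=0.0, a=5.0, t=2.0)
print(v, s)                        # 10.0 m/s and 10.0 m
print(v**2, 0.0**2 + 2 * 5.0 * s)  # both 100.0, consistent with v² = u² + 2as
```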
These relationships can be demonstrated graphically. The gradient of a line on a displacement time graph represents the velocity. The gradient of the velocity time graph gives the acceleration while the area under the velocity time graph gives the displacement. The area under an acceleration time graph gives the change in velocity.
Analogy between linear and rotational motion
The following table refers to rotation of a rigid body about a fixed axis: s is arclength, r is the distance from the axis to any point, and a_t is the tangential acceleration, which is the component of the acceleration that is parallel to the motion. In contrast, the centripetal acceleration, a_c = v²/r = ω²r, is perpendicular to the motion. The component of the force parallel to the motion, or equivalently, perpendicular to the line connecting the point of application to the axis, is F⊥. The sum is over the particles and/or points of application.
|Linear motion||Rotational motion||Defining equation|
|Displacement = x||Angular displacement = θ||θ = s/r|
|Velocity = v = dx/dt||Angular velocity = ω = dθ/dt||ω = v/r|
|Acceleration = a = dv/dt||Angular acceleration = α = dω/dt||α = a_t/r|
|Mass = m||Moment of inertia = I||I = Σ m_j r_j²|
|Force = F = ma||Torque = τ = Iα||τ = Σ r_j F⊥,j|
|Kinetic energy = ½mv²||Kinetic energy = ½Iω²|||
The following table shows the analogy in derived SI units:
- Angular motion
- Centripetal force
- Inertial frame of reference
- Linear actuator
- Linear bearing
- Mechanics of planar particle motion
- Motion graphs and derivatives
- Reciprocating motion
- Rectilinear propagation
- Uniformly accelerated linear motion.
Free Trade vs. Fair Trade: Teaching NAFTA 19 Years Later
If we can't logically explain why life isn't fair, or why nothing in life is free, then how do we educate kids on the complex issues that make up NAFTA?
This best practices guide is a resource for teachers and parents tasked with educating students on NAFTA-related concepts, global relations and other corresponding issues. The point is to show students both negative and positive perspectives on NAFTA.
Ratified on December 8th, 1993 and enacted January 1st, 1994, the tripartite treaty between the US, Canada and Mexico was designed to open up the North American borders and create a free trade region that would benefit all parties. NAFTA was a controversial topic during the initial negotiations and remains so today.
Opponents then feared that the agreement would eliminate jobs and threaten the environment, among other issues. NAFTA supporters contended that the agreement would create new jobs, spark economic growth, and improve environmental conditions and living standards in all three countries.
The Logic: Making Participating Countries More Competitive
The primary motive behind NAFTA was to invigorate economic growth in North America and boost conditions for fair competition. The free trade model promised each country market advantage, as the market itself would determine the producers and the consumers.
In this scenario, the already-developed U.S. and Canada would buy tariff-free goods from lesser-developed Mexico, which could manufacture them more cheaply. Economically unburdened by the elimination of tariffs and import quotas, the U.S. and Canada could focus more on producing high-tech goods and innovation and invest in a more highly developed infrastructure.
In turn, Mexico would also benefit from these free trade relations. As the term ‘free trade’ specifies, Mexico would receive globalization in the form of investment and technology from Canada and the US. In this manner, Mexico would eventually realize success along the lines of its North American counterparts.
NAFTA includes impartial rules-based dispute mechanisms to safeguard the fairness and stability required for the agreement to function. When a dispute first arises, NAFTA asks the concerned parties to try to resolve their disagreements through committees. If no solution is agreed upon, they must follow the mechanisms defined by the NAFTA Secretariat, the tripartite body in charge of NAFTA-related disputes.
Guide for Teaching NAFTA
The consequences of NAFTA over the last decade are convoluted. More specifically, economists, politicians, unions, consumer advocacy groups and citizens interpret the outcomes differently. For some, the free trade agreement was very unfair, indeed.
The following issues will all serve as good starting points for a discussion, essay assignment, or group presentation. Point out the paradoxes and ambiguities and have students discuss and research the issues that interest them most.
Member countries did benefit from a multi-trillion-dollar cumulative gross domestic product, a recent New York Times article indicates. The same article states that "The pact has benefited all three members." However, other sources argue that Mexico was unable to utilize this benefit because of the country's political instability and internal economic policies.
NAFTA is said to have effectively deindustrialized the U.S. because many manufacturing jobs were displaced to Mexico. Interestingly, NAFTA has also created millions of jobs in the US. Other sources like NAFTANOW.ORG point out that in 2008, U.S. manufacturing exports reached an all-time high of US$1.0 trillion.
It's worth noting that the U.S. expanded the maquiladora program so that it could take advantage of cheap Mexican labor for export to the US. While this effort increased employment in Mexico, it had a negative effect on working conditions. Maquiladora workers had no rights, health, or pension benefits and were often required to work days of 12 hours or more.
Another expectation on behalf of the NAFTA members was that by creating more Mexican jobs, immigration from Mexico to the United States would substantially decrease. This has not been the case, with over 500,000 Mexican immigrants entering the U.S. every year.
Increased Dependency on Imports
Tariffs and quotas are used by nations worldwide to protect themselves against foreign competition. Some parties assert that while industrialized countries, like the U.S., benefited from freeing the market by making it easier and cheaper to export their products and services, doing so still had unfavorable consequences on developing countries like Mexico.
The free market, they argue, crushed agricultural nations depending on quotas to secure food and living conditions for their rural populations. Without tariffs these countries lose their main income source as well as access to their own food production. Suddenly poorer countries are dependent on imports that they can’t afford from the U.S., Canada, and other industrialized nations.
Define Globalization- Have students write an essay in which they define the concept and then share their ideas in a classroom discussion. Analyze their answers in a group. If the students show that they are very engaged and opinionated, break the group in two. One will argue the pros, the other the cons of globalization. Begin or conclude by presenting an overview of the concept to guide and inform students’ understandings.
Consumer Knowledge- Ask each student to select a favorite personal item as the basis for a six minute presentation. They will research the object and identify where the item was made. As preparation for their presentation, have each student locate the country on a map. They will research the country, identifying its primary exports and imports, education, economy, living conditions and so on. Have them outline potential positive and negative aspects of their findings. Ask how the knowledge they’ve gained would affect their decision to buy the product again. The presenter will also provide one or two methods for increasing consumer knowledge.
NAFTA Preamble- Provide copies of the Trade Preamble to each student. First, divide the classroom into groups of four to discuss the pros and cons of the treaty. Ask each group to share with the classroom their list of pros and cons. Write a pro/con chart on the board and record the contributions. Discuss as a larger group.
NAFTA Member Debate- Divide the classroom into three groups. Assign each group a NAFTA country. Have them begin by brainstorming their country’s position(s) on the agreement. Each group will identify key topics to cover such as the economy, employment rates, and so on. Then have them divide the research responsibilities among themselves. Once the groups are fully prepared, orchestrate a classroom debate.
What about Canada and the Environment?- One must dig deep to find resource material on Canada’s part in NAFTA. Depending on the grade level you are working with, challenge students to write an essay, give a presentation or lead a discussion on Canada’s role in the agreement, along with how they have or haven’t benefited.
When looking for material on the environment, direct them to NAFTANOW.ORG’s Myths vs. Reality and the Commission For Environmental Cooperation as starting points. Most importantly, emphasize that NAFTA, like other political issues throughout history, is not transparent but complex and dynamic. What matters is to gain knowledge on the issues and develop an empowering awareness that will translate to better decision making and relationships in their lives.
Additional Resources for Teachers, Parents and their Students
Accessible resources for facts on NAFTA:
World Savvy Monitor on:Economy and Trade (NAFTA) (focus on Mexico)
Globalexchange.org on Food and Fair Trade:Food Security, Farming, and the WTO and CAFTA
“The Children of NAFTA: Labor Wars on the U.S./Mexican Border” by David Bacon
“NAFTA Revisited: Achievements and Challenges” by Gary Clyde Hufbauer and Jeffrey J. Schott |
The term micro-g environment (also µg, often referred to by the term microgravity) is more or less a synonym of weightlessness and zero-G, but indicates that g-forces are not quite zero, just very small. The symbol for microgravity, µg, was used on the insignias of Space Shuttle flights STS-87 and STS-107, because these flights were devoted to microgravity research in Low Earth orbit.
Absence of gravity
A "stationary" micro-g environment would require travelling far enough into deep space so as to reduce the effect of gravity by attenuation to almost zero. This is the simplest in conception, but requires traveling an enormous distance, rendering it most impractical. For example, to reduce the gravity of the Earth by a factor of one million, one needs to be at a distance of 6 million km from the Earth, but to reduce the gravity of the Sun to this amount one has to be at a distance of 3700 million km. (The gravity due to the rest of the Milky Way is already smaller than one millionth of the gravity on Earth, so we do not need to move away further from its center). Thus it is not impossible, but it has only been achieved so far by four interstellar probes (Voyager 1 and 2, part of the Voyager program, Pioneer 10 and 11 part of the Pioneer program) and they did not return to Earth. To reduce the gravity to one thousandth of that on Earth one needs to be at a distance of 200,000 km.
|Location||Gravity due to Earth||Gravity due to Sun||Gravity due to rest of Milky Way||Total|
|Earth's surface||9.81 m/s2||6 mm/s2||200 pm/s2 = 6 mm/s/yr||9.81 m/s2|
|Low Earth orbit||9 m/s2||6 mm/s2||200 pm/s2||9 m/s2|
|200,000 km from Earth||10 mm/s2||6 mm/s2||200 pm/s2||up to 12 mm/s2|
|6 million km from Earth||10 μm/s2||6 mm/s2||200 pm/s2||6 mm/s2|
|3700 million km from Earth||29 pm/s2||10 μm/s2||200 pm/s2||10 μm/s2|
|Voyager 1 (17,000 million km from Earth)||1 pm/s2||500 nm/s2||200 pm/s2||500 nm/s2|
|0.1 light-year from Earth||400 am/s2||200 pm/s2||200 pm/s2||up to 400 pm/s2|
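These order-of-magnitude figures follow directly from the inverse-square law, g = GM/r². The sketch below recomputes a few of the table entries; the constants are standard rounded values and the distances are the ones quoted above.

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
M_SUN = 1.989e30     # kg
AU = 1.496e11        # Earth-Sun distance, m

def g_accel(mass_kg, distance_m):
    """Gravitational acceleration toward a point mass at the given distance."""
    return G * mass_kg / distance_m**2

print(g_accel(M_EARTH, 6.371e6))   # Earth's surface: ~9.8 m/s^2
print(g_accel(M_EARTH, 2.0e8))     # 200,000 km from Earth: ~10 mm/s^2
print(g_accel(M_SUN, AU))          # Sun at Earth's distance: ~6 mm/s^2
print(g_accel(M_EARTH, 6.0e9))     # 6 million km from Earth: ~10 μm/s^2
```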
From stationarity the gravity from "the rest of the Milky Way" would cause a free fall, covering a distance of 100 pm in one second, 360 nm in one minute, 1.3 mm in one hour, 70 cm in one day, 37 m in one week, 100 km in one year, and 10,000 km in 10 years (at a speed at that last location of 6 cm/s).
At orbital altitudes relatively close to Earth (less than 3000 km), gravity is only slightly weaker than at the surface, yet people and objects in orbit feel weightless. The reason is not an absence of gravity but free fall: gravity is still pulling the objects toward the Earth, but because they are moving forward at very high speed, they continually "fall around" the planet instead of descending into the atmosphere. This is how, and why, satellites stay in orbit: they are held by the pull of gravity, which is nonetheless not enough, at their speed, to bring them down to Earth. It is also why everything inside a spacecraft appears to float: the spacecraft and its contents are all in free fall together, accelerating toward Earth at the same rate.
What remains is a micro-g environment moving in free fall, i.e. there are no forces other than gravity acting on the people or objects in this environment. To prevent air drag from making the free fall less perfect, objects and people can free-fall in a capsule that, while not necessarily in free fall itself, is accelerated as in free fall. This can be done by applying a force to compensate for air drag. Alternatively, free fall can be carried out in space, or in a vacuum tower or shaft.
Two cases can be distinguished: one where the situation is only temporary, because after some time the Earth's surface is or would be reached, and one where the situation can go on indefinitely.
A temporary micro-g environment exists in a drop tube (in a tower or shaft), a sub-orbital spaceflight, e.g. with a sounding rocket, and in an airplane such as used by NASA's Reduced Gravity Research Program, aka the Vomit Comet, and by the Zero Gravity Corporation. A temporary micro-g environment is applied for training of astronauts, for some experiments, for filming movies, and for fun.
A micro-g environment for an indefinite time, while also possible in a spaceship going to infinity in a parabolic or hyperbolic orbit, is most practical in an Earth orbit. This is the environment commonly experienced in the International Space Station, Space Shuttle, etc. While this scenario is the most suitable for scientific experimentation and commercial exploitation, it is still quite expensive to operate in, mostly due to launch costs.
Objects in orbit are not perfectly weightless due to several effects:
- Effects depending on relative position in the spacecraft:
- In Low Earth orbit (LEO), the force of gravity decreases upward by 0.33 μg/m. Objects which have a non-zero size will be subjected to a tidal force, or a differential pull, between the high and low ends of the object. (An extreme version of this effect is spaghettification.)
- In a spacecraft in LEO, the centrifugal force is greater on the side of the spacecraft furthest from the Earth. This is also a tidal force, adding 0.17 μg/m to the first-mentioned effect.
- "Floating" objects in a spacecraft in LEO are actually in independent orbits around the Earth. If two objects are placed side-by-side (relative to their direction of motion) they will be orbiting the Earth in different orbital planes. Since all orbital planes pass through the center of the earth, any two orbital planes intersect along a line. Therefore two objects placed side-by-side (at any distance apart) will come together after one quarter of a revolution. If they are placed so they miss each other, they will oscillate past each other, with the same period as the orbit. This corresponds to an inward acceleration of 0.17 μg per meter horizontal distance from the center. If they are placed one ahead of the other in the same orbital plane, they will maintain their separation. If they are placed one above the other (at different radii from the center of the earth) they will have different potential energies, so the size, eccentricity, and period of their orbits will be different, causing them to move in a complex looping pattern relative to each other.
- Gravity between the spacecraft and an object within it may make the object slowly "fall" toward a more massive part of it. The acceleration is 0.007 μg for 1000 kg at 1 m distance.
- Uniform effects (which could be compensated):
- Though very thin, there is some air at orbital altitudes of 185 to 1,000 km. This atmosphere causes deceleration due to friction. This could be compensated by a small continuous thrust, but in practice the deceleration is only compensated from time to time, so the small g-force of this effect is not eliminated.
- The effects of the solar wind and radiation pressure are similar, but directed away from the Sun. Unlike the effect of the atmosphere it does not reduce with altitude.
In a shot tower (now obsolete), molten metal (such as lead or steel) was dripped through a sieve into free fall. With sufficient height (several hundred feet), the metal would be solid enough to resist impact (usually in a water bath) at the bottom of the tower. While the shot may have been slightly deformed by its passage through the air and by impact at the bottom, this method produced metal spheres of sufficient roundness to be used directly in shotgun shells or to be refined by further processing for applications requiring higher accuracy.
High quality crystals
While not yet a commercial application, there has been interest in growing crystals in micro-g, as in a space station or automated artificial satellite, in an attempt to reduce crystal lattice defects. Such defect-free crystals may prove useful for certain microelectronic applications and also to produce crystals for subsequent X-ray crystallography.
- μFluids@Home — a distributed computing project that models the behavior of liquid rocket propellants in micro-g
- European Low Gravity Research Association
- "Space myths and misconceptions - space flight". OMNI 15 (7): 38ff. May 1993.
- Depending on distance, "stationary" is meant relative to Earth or the Sun.
- Zona, K. (13 Feb 2009). Glenn Research Center [Online]. National Aeronautics and Space Administration.
- "Weightlessness and Microgravity", David Chandler, The Physics Teacher, May 1991, pp. 312-13
- "Growing Crystals in Zero-Gravity" News Article by Discovery
- Overview of microgravity applications and methods
- Criticism of the terms "Zero Gravity" and "Microgravity", an argument for using terminology that reflects the physics accurately (sci.space post).
- Space Biology Research at AU-KBC Research Centre
Chemical Equations Practice Worksheet Answers. To earn full credit, write the words.
Mg + Cl2 → MgCl2. A balanced chemical equation is one in which the same number of atoms of each element appears on both sides of the equation.
Solid Zn reacts with silver(I) chloride to produce zinc(II) chloride and silver metal. Answer choices: Zn + AgCl → ZnCl2 + Ag; ZnCl2 + Ag → Zn + AgCl; ZnCl2 + Ag2 → Zn + AgCl; Zn2Cl + Ag → Zn + AgCl.
Balancing a chemical equation with substitution. Remember, in a chemical reaction the atoms and ions are simply rearranged, not created or destroyed.
A balancing chemical equations worksheet is a practice booklet with unsolved and solved chemical equation problems on which students can practice their balancing skills.
6CO2 + 6H2O → C6H12O6 + 6O2. Rxn.1: Describe a chemical reaction using words and symbolic equations.
SiCl4 + H2O → H4SiO4 + HCl. The only element that occurs more than once on the same side of the equation here is hydrogen, so we can start with any other element. (Balanced: SiCl4 + 4H2O → H4SiO4 + 4HCl.)
What is a "balanced" chemical equation?
Chemical formula writing worksheet solutions: write chemical formulas for the compounds in each box.
A) Aluminum metal reacts with iron(II) oxide powder to produce aluminum oxide solid and iron metal.
3. Long-Run Economic Growth
An improving standard of living depends on economic growth. To consume more we have to produce more. We will investigate three key questions in this chapter:
1. What determines the growth rate of output over time?
2. Why do poor countries remain poor and can they ever catch up?
3. Is there an optimal rate of economic growth?
We will attempt to address these questions through two complementary lines of research. First we develop a growth accounting approach that will let us identify what has contributed to historical growth. Second we will present two different macroeconomic growth models that try to explain differences in growth and income across countries and provide insight into the desired path of growth.
Economic growth and our standard of living typically are measured by the quantity of goods and services we consume. The best available economic measure of quantity is real GDP. Real GDP eliminates the effects of inflation. A country's nominal GDP may be growing at 20 percent a year, but if its inflation rate is 30 percent a year then its actual output or real GDP is in fact shrinking. Real GDP may not be a perfect indicator of our well-being because it ignores some of the unmeasured benefits and costs of our behavior that we discussed in the earlier chapter on GDP accounting. But it is the best indicator that has been consistently measured over time.
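To make the 20 percent nominal versus 30 percent inflation example concrete, the implied real growth rate is (1 + nominal growth)/(1 + inflation) - 1. A quick sketch:

def real_growth(nominal_growth, inflation):
    # Real GDP growth implied by nominal GDP growth and inflation
    return (1 + nominal_growth) / (1 + inflation) - 1

print(real_growth(0.20, 0.30))  # about -0.077: real output shrinks roughly 7.7%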
Economic growth is generally measured in one of two ways. First, when we look at two or more countries we can compare total or aggregate real GDP. This measure doesn't accomplish much more than showing that one economy is larger or smaller than another or that it is growing faster or slower. Aggregate real GDP is of limited use because it does not reveal whether the residents of a given country are better or worse off. One country's aggregate real GDP may be twice that of another, but if it has four times the population (or labor force) then each person produces only half as much. But size does matter when you are considering possible economies of scale (larger is often better) or resources (capital and labor) that are available to the economy.
The second measure of economic growth is real GDP per person or per worker or per labor hour. A per capita real GDP provides a measure of the average output of each person. If each person produces more (on average) then each person can presumably consume more and is better off. For example, Figure 3-1 presents trends in the economic growth of real GDP per worker of six industrialized countries over the last 50 years. Growth is measured as real GDP per worker to eliminate the effects of inflation and population growth. U.S. aggregate nominal GDP grew an average 7.3% per year between 1950 and 2000. Real GDP, which eliminates price inflation grew by an average 3.4% per year over this period. Part of the growth in real output is due to a growing population and labor force. Dividing real GDP by the employed labor force produces an average growth rate in real GDP per worker in the U.S. of 1.9% per year. Thus we can directly compare the well-being of the residents of each country and the growth in the productive capacity of the average worker of both very large (the U.S.) and small (Hong Kong) economies.
|Economic Growth - change in aggregate real GDP or average real GDP per person over time.
Standard of Living - the real value of the quantity of goods and services consumed by the average person, typically measured as average real GDP per person, per worker, or per family.
Japan started the second half of the 20th century with a real GDP per worker at less than one-fourth that of the United Kingdom. But an impressive real annual growth rate per worker of almost 5.6% per year allowed Japan to catch the U.K. by 1990. Japan's engine of growth ran out of steam in the last 10 years of the 20th century with an average growth rate of 1.0%. Hong Kong did even better than Japan, almost catching the United States by 1997 before stalling.
|Table 3-1. Economic Growth of Six Countries by Decade|
|Period||U.K.||U.S.||Japan||Canada||Brazil||Hong Kong|
|Average 1950 - 1960||2.55%||1.94%||6.64%||1.96%||4.08%||n.a.|
|Average 1960 - 1970||2.30%||2.40%||8.94%||2.21%||4.08%||7.79%|
|Average 1970 - 1980||1.31%||1.41%||3.39%||1.07%||4.73%||5.24%|
|Average 1980 - 1990||2.17%||2.00%||3.34%||1.49%||-0.29%||4.69%|
|Average 1990 - 2000||1.87%||1.82%||1.00%||1.57%||1.24%||5.00%|
|Average 1950 - 2000||2.04%||1.91%||4.62%||1.66%||2.75%||n.a.|
|Note: Average growth rates are compounded annual average growth rates in real GDP per worker.|
Source: Penn World Tables (http://datacentre.chass.utoronto.ca/pwt/alphacountries.html).
While increases in the average standard of living from year to year may appear small, differences between generations can be great. Small changes build over time. For example, from 1950 to 2000 the difference between the growth rates of real GDP per worker for the United States and Canada was only 0.25% (Table 3-1). This small difference in growth rates led to a widening of the lead the U.S. had in real GDP per worker from about $2,000 per worker in 1950 to over $12,000 in 2000 (Figure 3-1).
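The sketch below shows the compounding at work. It uses the Table 3-1 growth rates, but the 1950 starting levels are illustrative assumptions chosen to give roughly the $2,000 starting gap described above, not the actual Penn World Table values:

us_growth, canada_growth = 0.0191, 0.0166   # average annual growth rates, Table 3-1
us_1950, canada_1950 = 25000, 23000         # illustrative starting levels (assumed)

us_2000 = us_1950 * (1 + us_growth) ** 50
canada_2000 = canada_1950 * (1 + canada_growth) ** 50

print(round(us_2000 - canada_2000))  # the $2,000 gap widens to roughly $12,000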
Understanding why economies grow begins with the production function. The production function describes the relationship between the inputs of labor and capital and the output of goods and services. Growth in aggregate output is then explained by four factors: the quantity of capital, the quality of capital, the quantity of labor, and the quality of labor.
The problem we face is how do we attribute the historical growth in aggregate output to changes in the quantities and qualities of capital and labor. Measuring the quantity of capital and the quantity of labor is not a serious problem. There is published data on the dollar value of installed capital and the number of labor hours worked. Measuring qualities, however, is a problem. Rather than try to identify and separate the quality of capital from the quality of labor, economists calculate a combined productivity index. The productivity index is referred to as multifactor productivity (also called total factor productivity), which represents output from the "factors" of production, capital and labor. Growth in multifactor productivity represents an increase in output that results from improvements in production processes, whether due to improvements in the quality of capital (such as from new technology) or improvements in the quality of labor (such as from better education or training), with the quantities of all inputs unchanged.
|Multifactor Productivity - productivity is a measure of economic efficiency which shows how effectively economic inputs are converted into output. Multifactor productivity is measured by comparing the amount of goods and services produced with the factors (e.g., capital and labor) used in production.|
We can explain historical growth in output in terms of changes in the quantity of labor, the quantity of capital, and multifactor productivity by starting with a general representation of the production function shown in equation (1). Production is a function of the economy's use of capital, K, labor, L, and a multifactor productivity index, A.
|Y = A f(K, L )||(1)|
We can convert equation (1) into a growth accounting equation (2), which relates the growth rate in aggregate output to the growth rates of capital, labor, and multifactor productivity (see Appendix A for details of the derivation).
|ΔY/Y = ΔA/A + εK ΔK/K + εL ΔL/L||(2)|
A growth rate is simply the amount of increase or decrease divided by the starting level. For example, let's say last year we produced 100 widgets and this year we produced 101 widgets. The increase in widget production (ΔY = 1) divided by last year's total production (Y = 100) equals a growth rate (ΔY / Y) of 0.01, or 1 percent (0.01 x 100).
The elasticity of output with respect to capital, εK, is the percent change in output that results from a 1% change in the level of capital with all other variables remaining unchanged. The elasticity of output with respect to labor, εL, is the percent change in output that results from a 1% change in the total labor force employed with all other variables remaining unchanged. For example, if the elasticity of output with respect to labor is 0.7, a 1% increase in the number of workers or labor hours will increase aggregate output by 0.7%.
If we know the elasticities of output with respect to capital and to labor we can empirically measure the relative importance of each of these sources of growth. Consider a simple example. Assume multifactor productivity, the level of real capital, and the labor force are all growing at 1% per year and the elasticity of output with respect to capital is 0.3 and the elasticity of output with respect to labor is 0.7. We can calculate the growth of aggregate output as ΔY/Y = 1% + (0.3 × 1%) + (0.7 × 1%) = 2%.
Now consider what happens if one of the three growth rates is 2% rather than 1%. Growth in multifactor productivity of 2% would boost aggregate output growth from 2% to 3%. Additional growth in the level of capital of 1% would add 0.3% to the output growth rate, and an additional 1% growth in the labor force would add 0.7% to output growth.
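The arithmetic in the last two paragraphs is easy to script. A minimal sketch of the growth accounting identity using the elasticities assumed above:

def output_growth(productivity_g, capital_g, labor_g, eps_k=0.3, eps_l=0.7):
    # Growth accounting: dY/Y = dA/A + eps_K * dK/K + eps_L * dL/L
    return productivity_g + eps_k * capital_g + eps_l * labor_g

print(output_growth(0.01, 0.01, 0.01))  # baseline: 2% output growth
print(output_growth(0.02, 0.01, 0.01))  # faster productivity growth: 3%
print(output_growth(0.01, 0.02, 0.01))  # an extra 1% capital growth adds 0.3%
print(output_growth(0.01, 0.01, 0.02))  # an extra 1% labor growth adds 0.7%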
An empirical study is a test of a hypothesis or theory using actual data. For example, we may want to empirically test the hypothesis that the U.S. economy grows faster during the year before a Presidential election than other countries and grows slower in the year following. The empirical test would involve entering economic growth rates for selected countries in a spreadsheet and statistically comparing growth rates for those two groups of years.
Comparing growth rates across countries as in Table 3-1 is not a problem despite differences in currencies because growth rates are independent of the units of measurement. But when we compare levels of output or income across countries as in Figure 3-1 we run into the problem of differences in currencies. U.S. output is measured in dollars and Japanese output is measured in yen. The first step in comparing levels is to convert currencies using foreign exchange rates. Output in yen can then be expressed as output in dollars by multiplying by the dollar-yen exchange rate.
We still have one significant problem - differences in the cost of living. For example, if we express welfare in terms of income per worker the income in dollars per worker in Japan may be identical to that of the United States. However, if the cost of living in Japan is higher because of housing, food and other costs, this direct comparison of incomes does not reveal actual differences in purchasing power and the standard of living.
Let's compare two U.S. States, which avoids the small complication of converting different currencies using exchange rates. The median annual household income in Virginia in 2000 was $46,677, while that of Mississippi was $31,330. Does this mean the average household in Virginia was better off? Not necessarily because the median value of owner-occupied housing in 2000 was $125,000 in Virginia versus $71,400 in Mississippi. Incomes may be lower on average in Mississippi but the cost of living, at least in terms of housing, was also lower. Output and income levels must be corrected for purchasing power parity in order to make valid comparisons.
|Exchange Rate - The price of one currency in terms of another currency. For example, the exchange rate between the yen and the dollar may be 100 yen = $1.00. This means that you need to pay a price of 100 yen to get $1.00, or pay $1.00 in exchange for 100 yen. Exchange rates can be fixed or floating. Fixed means that they stay at the same value as set by the government. Floating means that they fluctuate day to day according to the market.
Purchasing Power Parity (PPP) - The PPP exchange rate represents the quantities of money in each currency that would buy exactly the same basket of goods in both countries. For example, say a certain basket of goods costs 1,000 yen in Japan, and the same basket costs $10 in the U.S. The PPP rate would be 100 yen = $1. The PPP exchange rate will often differ from the actual exchange rate.
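As a sketch of how the PPP correction changes a comparison, suppose (hypothetically) a worker in Japan earns 5,000,000 yen a year, the market exchange rate is 100 yen per dollar, and the PPP exchange rate is 155 yen per dollar; all three numbers are illustrative assumptions:

income_yen = 5_000_000   # hypothetical annual income in yen
market_rate = 100        # yen per dollar, market exchange rate (assumed)
ppp_rate = 155           # yen per dollar, PPP exchange rate (assumed)

print(income_yen / market_rate)  # $50,000 converted at the market rate
print(income_yen / ppp_rate)     # about $32,258 in purchasing-power terms

The two dollar figures differ because the market rate says nothing about how much the income actually buys in Japan.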
International comparisons are made easier using data provided by Alan Heston, Robert Summers and Bettina Aten with the Center for International Comparisons at the University of Pennsylvania. The Penn World Tables (http://pwt.econ.upenn.edu/) provide real national income accounts converted to U.S dollars based on purchasing power parity for 179 countries for some or all of the years 1950-2000. The Organization for Economic Cooperation and Development (OECD) also lists PPP exchange rates for its member countries (Annual National Accounts for OECD Member Countries, http://www.oecd.org/).
The Japan-U.S. exchange rate makes an interesting study of the role of purchasing power parity. In 1980 the market exchange rate was 227 yen to the dollar. The PPP exchange rate (based on a basket of goods and services represented by GDP) was close to the actual at 231 yen to the dollar. The exchange rates were very different in 2000: the actual rate was 108 yen per dollar and the PPP exchange rate was 155 yen per dollar. While the yen had appreciated in value over those 20 years (it now takes fewer yen to buy one dollar), the smaller decline in the PPP exchange rate indicates that it has become relatively more costly to live in Japan, which has offset some of their gain in the actual exchange rate.
The growth accounting equation (2) gives us a foundation for evaluating historical economic growth and its causes. Empirical studies of economic growth generally follow three steps:
1. Determine the growth rates of aggregate output (ΔY/Y), labor (ΔL/L), and capital (ΔK/K) over some period of time.
2. Estimate the elasticities of output with respect to capital, εK, and to labor, εL. This is made somewhat easier if we assume markets are competitive, which implies that the elasticities are equal to income shares that are observable (see Appendix B). For the United States the elasticity of output with respect to capital has been about 0.3, and the elasticity of output with respect to labor about 0.7.
3. Calculate the contribution of the growth in capital (εK ΔK/K) and the growth in labor (εL ΔL/L) to the growth in output (ΔY/Y). The difference between the growth in aggregate output and the contributions from capital and labor is attributed to multifactor productivity change. Productivity change is treated as an unexplained residual.
Two well known studies that follow this growth accounting procedure are those of Robert Solow and Edward Denison, which are summarized in Table 3-2. One interesting result of these studies is the smaller contribution made by the growth in capital. Increasing aggregate output depends more on increases in multifactor productivity than on real capital. Denison suggested that almost two-thirds of the increase in multifactor productivity (0.66% of the 1.02%) was due to advances in knowledge. Improved resource allocation (e.g., movement of workers from farms to the cities) and economies of scale added 0.23% and 0.26% to output growth, respectively.
|Table 3-2. Solow and Denison Studies of U.S. Growth|
|percent change per year|
|Period Covered||1909 - 1949 (Solow)||1929 - 1982 (Denison)|
|Total Output, ΔY/Y||2.9||2.92|
|Capital Inputs, ΔK/K||0.32||0.56|
|Labor Inputs, ΔL/L||1.09||1.34|
|multifactor Productivity, ΔA/A||1.49||1.02|
|Sources: Robert Solow, "Technical Change and the Aggregate Production Function", Review of Economics and Statistics, August 1957;|
Edward F. Denison, Trends in American Economic Growth, The Brookings Institution, 1985.
We can take our own stab at estimating the growth in multifactor productivity using published data on real GDP, hours worked and the private capital stock. For example, Table 3-3 presents these data published by the Bureau of Economic Analysis (Dept. of Commerce) and Bureau of Labor Statistics (Dept. of Labor).
|Table 3-3. U.S. Growth Accounting|
| ||Index at start of period||Index at end of period||Average annual growth|
|Output: Real GDP Quantity Index (2000=100)||65.958||105.749||2.99 %|
|Capital: Private Nonresidential Fixed Assets Index (2000=100)||69.663||105.714||2.64 %|
|Labor: Total Private Aggregate Weekly Hours Index (2002=100)||79.77||98.62||1.33 %|
|Sources: Real GDP from Bureau of Economic Analysis (BEA), National Income and Product Accounts, Table 1.1.3 (http://www.bea.gov/bea/dn/nipaweb/index.asp); Nonresidential fixed assets from BEA, Fixed Assets, Table 4.2 (http://www.bea.gov/bea/dn/FA2004/SelectTable.asp); and Total private aggregate weekly hours from Bureau of Labor Statistics, Current Employment Statistics Survey, Series CES0500000040 (http://www.bls.gov/ces/home.htm)|
Given the estimates for the elasticities of output with respect to capital (εK=0.3) and labor (εL=0.7) published by others (or we could calculate income shares as explained in the Appendix) we can easily calculate the growth in multifactor productivity:
|Real GDP growth rate = Productivity growth rate + εK × fixed assets growth rate + εL × labor growth rate|
|2.99 = Productivity growth rate + 0.3 × 2.64 + 0.7 × 1.33|
|Productivity growth rate = 2.99 - 0.79 - 0.93|
|Productivity growth rate = 1.27 % per year average|
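The same residual calculation in a short script, using the growth rates from Table 3-3 and the assumed elasticities:

gdp_growth, capital_growth, labor_growth = 2.99, 2.64, 1.33   # percent per year, Table 3-3
eps_k, eps_l = 0.3, 0.7                                       # assumed output elasticities

mfp_growth = gdp_growth - eps_k * capital_growth - eps_l * labor_growth
print(round(mfp_growth, 2))  # 1.27 percent per year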
We calculate an average growth rate of multifactor productivity of 1.27 percent per year. The Bureau of Labor Statistics (BLS) calculated average growth of multifactor productivity of 0.8 percent per year over this same period. Calculating multifactor productivity is not as simple as the calculation we provided above. Some corrections the BLS includes in their calculation are (Bureau of Labor Statistics, BLS Handbook of Methods, Chapter 10, http://www.bls.gov/opub/hom/homtoc.htm):
- BLS excludes the following outputs from real GDP growth: general government, nonprofit institutions, paid employees of private households, and the rental value of owner-occupied dwellings.
- Real stocks are constructed as vintage aggregates of historical investments (in real terms) in accordance with an "efficiency" or service flow concept (as distinct from a price or value concept). The efficiency of each asset is assumed to deteriorate only gradually during the early years of an asset's service life and then more quickly later in its life.
- The hours of employees of government enterprises are excluded. Also, the hours at work for each of 1,008 types of workers classified by their educational attainment, work experience and gender are aggregated using an annually chained (Tornqvist) index. The growth rate of total labor is therefore a weighted average of the growth rates of each type of worker where the weight assigned to a type of worker is its share of total labor compensation. The resulting aggregate measure of labor input accounts for both the increase in raw hours at work and changes in the skill composition (as measured by education and work experience) of the work force.
This last correction, weighting labor hours by skill, is probably the most serious. Earlier we said that multifactor productivity includes improvements in labor such as from better education or training. However, productivity increases arising from improving work skills through training or education may not show up in the multifactor productivity statistic reported by BLS. Consequently the BLS estimate of multifactor productivity growth should be smaller than our quick calculation.
Growth accounting is useful for identifying what contributed to the growth of output. But growth accounting does not explain why capital and technology grow at the rates they do. Growth models attempt to explain why expansion of the capital stock and economic growth are related.
The first model we study is the neoclassical growth model pioneered by Robert Solow in the 1950s (Robert M. Solow, "A Contribution to the Theory of Economic Growth," Quarterly Journal of Economics, February 1956, 65-94). Solow won the Nobel Prize in economics in 1987 for his work on economic growth.
Production conditions (i.e., supply rather than demand) generally dominate growth models. Growth models focus on the long-run trend in output, more commonly called potential or full-employment output, rather than the short-run booms and busts in which an economy cycles around its long-term trend. This allows us to avoid questions of business cycles, government stabilization policy, and unemployment and focus on the longer run issues of saving and investment policy. In fact, government policies designed to stimulate demand during a recession may have negative consequences for long-run growth. For example, tax cuts may "crowd out" investment spending as we will explain in later chapters.
The foundation of the neoclassical growth model rests on two assumptions:
1. Exogenously determined (i.e., determined by some process outside the model) labor supply, which grows at some given rate, n. Growth models do not assume households have more or fewer children as a country becomes poorer or wealthier (although the rate of population increase is generally lower in wealthier nations).
2. Some form of production function is assumed. In the neoclassical growth model the production function is assumed to exhibit constant returns to scale. Constant returns to scale simply means that if the labor force grows at 2% per year and capital grows at 2% per year then output also grows at 2% per year.
We present the neoclassical growth model in two parts. First, in Section A we apply the two assumptions in an analysis of a steady state. The surprising implication of this model is that the long-run steady state aggregate economic growth rate depends only on the labor supply (population) growth rate and not on the level of capital available to labor. Then in Section B we add equations for savings and investment to evaluate why poor countries do not catch up to the rich countries and if there is an optimal rate of savings and investment (called the Solow "Golden Rule").
Neoclassical growth models identify a steady state where the rates of change in output, capital and labor are constant. A steady state is something like an equilibrium condition. If there is no shock to the economy then markets will maintain a constant rate of growth. The question we ask is what determines the rate of growth.
|Steady State - a condition of constant rates of growth in economic measures. With no technological change, a steady state is represented by identical constant growth rates in the labor force, total output, and the level of capital.|
First we start with the growth accounting equation and assume that technology is constant, i.e., there is no productivity growth. In other words, in equation (2) ΔA/A = 0. Thus, we can simplify equation (2) as follows:
|ΔY/Y = εK ΔK/K + εL ΔL/L||(3)|
A steady state with no technological change implies that output per worker and the capital-labor ratio are constant. Capital must grow at the same rate as the labor force. If capital grows faster or slower than the quantity of labor then the economy is not in a steady state rate of growth. Another way of looking at it is if there is no technological change there should be no reason to increase or decrease the amount of capital every worker is given. New entrants to the labor force will be given a level of capital identical to all other workers.
If labor supply grows at some constant rate, n, then ΔL/L = n. Since the level of capital per worker is constant in this model of a steady state then capital must grow at the same rate as the labor force, or ΔK/K = n. Equation (3) can be rewritten:
|ΔY/Y = εK n + εL n||(4)|
Equation (4) doesn't tell us much because the elasticities of output with respect to capital, εK, and labor, εL, are unknown. We can give it some meaning by making a key assumption about the form of the production function, which determines the elasticities.
We assume the production function has constant returns to scale. Constant returns to scale simply means that if the labor force grows at 2% per year and capital grows at 2% per year (the capital-labor ratio in this steady state model is constant) then output also grows at 2% per year. Decreasing returns to scale in macroeconomic growth models implies that as the labor force grows (and the level of capital grows with it) a country would get poorer in terms of output (and real income) per worker, because total output would not increase as fast as the labor force. Thomas Malthus applied the concept of decreasing returns when he conjectured in 1798 (An Essay on the Principle of Population) that the world population would eventually outgrow the capability to produce food. Increasing returns to scale implies that output grows faster than the labor force; the only thing a country would need to do for its residents to become wealthier is to increase its labor force while maintaining the capital-labor ratio.
|Returns to Scale - the change in production that occurs when all resources are proportionately increased (increased by the same percentage). If labor, capital, and all other inputs to the production process increase by 1%, does output increase by more than 1% (increasing returns to scale), less than 1% (decreasing returns to scale), or exactly 1% (constant returns to scale)?|
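As a numerical illustration, the sketch below uses a Cobb-Douglas production function Y = K^0.3 L^0.7. The Cobb-Douglas form is an assumption made only for this illustration; the argument in the text does not depend on it. With this function, growing both inputs by 2% grows output by exactly 2%, and doubling both inputs doubles output:

def cobb_douglas(K, L, eps_k=0.3, eps_l=0.7):
    # Illustrative constant-returns production function Y = K^0.3 * L^0.7
    return K ** eps_k * L ** eps_l

Y1 = cobb_douglas(100, 1000)
Y2 = cobb_douglas(102, 1020)         # capital and labor both grow 2%
print(Y2 / Y1)                       # 1.02 -- output also grows 2%
print(cobb_douglas(200, 2000) / Y1)  # 2.0 -- doubling inputs doubles output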
When we assume constant returns to scale we can show mathematically (see Appendix C) that εK + εL = 1. Thus, equation (4) simplifies to reveal that the growth rate of output equals the growth rate of the labor force in steady state as shown in equation (5).
|ΔY/Y = n||(5)|
The surprising implication of this model is that the level of capital, K, has no effect on the long-term growth rate of an economy. The workers in one country can be loaded with the best tools and factories while another country's workers may be barely equipped, yet the steady-state aggregate economic growth rates of both countries can be identical if their populations are growing at the same rates. While output per worker will be greater for the country with the higher capital-labor (K/L) ratio, output per worker will not change in either country unless there is a change in total factor productivity. In other words, in steady state the economies of countries will grow at the same rate as their populations but the output of each individual worker remains unchanged. Economies can grow faster than their populations only if productivity improves.
In the opening of this section we made the simplifying assumption that technological change is zero and total factor productivity remains unchanged. This is a very strong assumption. If instead the rate of technological change, ΔA/A, in the production function equation (1) and growth accounting equation (2) is some positive number, then it should be obvious that economic growth is a combination of technological change and the growth rate of the labor force. This is more realistic but it doesn't change our primary observation that the level of capital does not affect the long-run growth rate of the economy in steady state.
The representation for technological change, A, in equation (1) is itself a strong assumption. This implies that technology advances at a constant rate over time and is neutral in that it relates to capital and labor in the same way. In other words, we get the same rate of productivity improvement regardless of the relative levels of capital and labor. The model can be modified to account for differences in technological change between the factors of production. For example, we could introduce a labor productivity factor, which would result in maintaining a steady state capital-output rather than capital-labor ratio (referred to as Harrod neutrality). We still get a similar result. Output grows at the same rate as the labor force plus the rate of technological change. The assumptions and result are perhaps more consistent with what we actually observe in most economies, but do not change the implications of the model.
So far we have established that the steady-state growth rate of aggregate output depends only on the rate of growth of the labor force (and technological change). The capital-labor ratio influences only the level and not the steady-state growth rate of an economy. For example, let's consider a rich and a poor country. Assume the per worker output of the rich country is 50% greater than that of the poor country. If the labor force growth rate of both countries is 2% per year then we can expect the aggregate output of both countries to double in 35 years. After 35 years the per worker output of both countries will remain unchanged and the rich country will still produce 50% more output per worker than the poor country.
|The 2% growth and doubling in 35 years is a simple rule-of-thumb that we get from the "Rule of 70." The rule of 70 states that the approximate number of years it takes a variable to double is 70 divided by the annual growth rate.|
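A quick check of the rule of 70 against the exact doubling time ln(2)/ln(1 + g):

import math

def doubling_time_exact(g):
    # Exact number of years to double at a constant annual growth rate g (e.g. 0.02)
    return math.log(2) / math.log(1 + g)

def doubling_time_rule_of_70(g):
    # Rule-of-70 approximation: 70 divided by the growth rate in percent
    return 70 / (g * 100)

for g in (0.01, 0.02, 0.05):
    print(g, round(doubling_time_exact(g), 1), doubling_time_rule_of_70(g))
# At 2% growth the exact answer is 35.0 years and the rule of 70 also gives 35.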
Increasing the level of capital and the capital-labor ratio may not contribute to a permanently higher economic growth rate, but it does give a short-term boost to output per worker and the standard of living. If countries can choose the amount of capital employed and can raise or lower the capital-labor ratio, is more capital always better? The next surprising result of our analysis is that a higher output per worker is not necessarily better. To see this result we need to expand the neoclassical model to include savings and investment in the Solow growth model.
A constant returns to scale production function was a key assumption that allowed us to simplify the growth accounting equation. In the Solow growth model we again put the constant returns assumption to use in a slightly different way.
A production function that exhibits constant returns to scale is represented in equation (6). We still assume there is no technological change (the productivity index A is constant). The constant returns production function, as noted earlier, implies that if we multiply both the quantity of labor and the quantity of capital by some positive number, z, then we also multiply output by z.
|z Y = f (z K, z L)||(6)|
If we assume the value of z is equal to 1/L, then we can convert the aggregate production function in equation (6) to a per worker production function in equations (7) and (8)
|Y/L = f (K/L, 1)||(7)|
|Y/L = f (K/L)||(8)|
So far nothing has changed from what was presented in the previous section. We have simply transformed the aggregate production function to a per worker basis, which is illustrated in Figure 3-3. Per worker output, Y/L, in Figure 3-3 is determined by the capital-labor ratio, K/L.
The production function has a positive slope because an increase in capital per worker, K/L, results in an increase in output per worker, Y/L. The slope of the production function represents the marginal product of capital. Each one unit increase in the capital-labor ratio increases output per worker by an amount equal to the marginal product of capital. The bowed shape of the production function implies diminishing marginal productivity of capital. Each incremental increase in capital with the amount of labor held constant (i.e., the capital-labor ratio increases) produces progressively smaller increases in output.
|Marginal Product - the change in the quantity of total output resulting from a unit change in a variable input, keeping all other inputs unchanged. The marginal product of capital is the change in output resulting from a 1 unit change in capital with the quantity of labor and other inputs held constant.|
Second, we assume the national savings rate is some fixed fraction of total output as shown in equation (9):
|S = s Y||(9)|
Savings on a per worker basis is shown in equation (10). Output per worker is again plotted as a function of the capital-labor ratio in Figure 3-4. Output per worker is multiplied by the fixed savings rate, s, to obtain the lower dashed savings per worker line.
|S/L = s Y/L||(10)|
Third, investment is by definition equal to the change in the capital stock plus depreciation. We must not only equip new workers with capital but must also replace existing equipment that wears out or becomes obsolete. The change in the capital stock required to equip new workers is the labor force growth rate, n, times the current level of capital, or n K. Let d represent the capital depreciation rate, or the fraction of capital that wears out each year. The amount of capital that must be replaced every year because of depreciation is the depreciation rate, d, times the level of capital, or d K. Thus, total investment in steady state is represented by equation (11).
|I = n K + d K = (n + d) K||(11)|
Investment on a per worker basis is presented in equation (12) and illustrated in Figure 3-5.
|I/L = (n + d) K/L||(12)|
Fourth, we assume a steady state condition that savings equals investment. Steady state in Figure 3-6 is represented by the point where the savings line crosses the investment line. If the capital-labor ratio is less than the steady-state level, savings exceeds the level of investment needed to maintain this low capital-labor ratio. The extra savings is invested in new capital, which makes the capital-labor ratio increase until the steady-state point is reached. Similarly, if the capital-labor ratio is to the right of the steady state point we have the opposite situation: savings is less than the amount needed to finance the investment required to replace depreciated capital and to equip new workers. The capital-labor ratio declines until the steady state level is reached.
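The steady state is the capital-labor ratio k at which saving per worker equals required investment per worker, s f(k) = (n + d) k. The sketch below finds it by simply iterating the change in k; the per worker production function f(k) = k^0.3 and the parameter values are illustrative assumptions, not values given in the text:

def f(k, eps_k=0.3):
    # Illustrative per worker production function, y = k^0.3
    return k ** eps_k

def steady_state_k(s, n, d, k0=1.0, years=1000):
    # Iterate the change in k: saving per worker less required investment per worker
    k = k0
    for _ in range(years):
        k = k + s * f(k) - (n + d) * k
    return k

k_star = steady_state_k(s=0.20, n=0.02, d=0.05)
print(round(k_star, 2), round(f(k_star), 2))   # steady-state k and output per worker

Re-running the sketch with a higher savings rate, or a lower population growth rate, produces a higher steady-state capital-labor ratio and output per worker, which is the comparison drawn in Figures 3-7 and 3-8.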
One of the common empirical observations made when comparing economic growth rates across countries is the inverse relationship between wealth and population growth. The poorest countries generally appear to have the highest rate of population growth. The sociological reasons for high population growth rates among the poor are beyond the scope of this course. But we can use the Solow growth model to see why there is a relationship between population growth rates and wealth.
Consider two countries with identical production functions, savings and capital depreciation rates but different population growth rates. The identical production function assumption simply means that the same technology is available to both countries. Any differences in wealth are not due to political barriers that prevent the export of new technology from one country to another.
We can illustrate the effect of different population growth rates on wealth in Figure 3-7. Country B has a higher population and labor force growth rate than country A. This difference is reflected in the two investment lines in Figure 3-7, labelled IA for country A and IB for country B. The IB investment line is higher than the IA investment line because its slope, n + d, is larger, reflecting the higher population growth rate, n. The consequence of the higher population growth rate is that output per worker is lower.
The logic is pretty simple. A given rate of savings will support some amount of investment in capital goods. With more people continually entering the labor force in the country with the higher population growth rate, the available capital goods must be spread more thinly. Each of the new and existing workers gets fewer tools to work with and consequently produces less output. While the aggregate economic growth rate of a poor country may be greater than that of a wealthy country (because of the higher population and labor force growth rates), the poor country will remain poor relative to the wealthy country.
We can look at the effect of the savings rate on wealth in the same way we considered population growth except that we assume now the population growth rates of two countries are identical while the savings rates differ. A change in the savings rate, s, is required for an economy to move to a different capital-labor ratio. We show this in Figure 3-8 with a low savings rate country A and high savings rate country B. A poor country with low savings must operate at a low capital-labor ratio. A wealthy country with a higher savings rate may operate at a higher capital-labor ratio. A poor country can increase its wealth by increasing its savings rate thereby moving from a low K/L steady state to a higher K/L steady state over time.
Why is one country poor and another wealthy? One answer lies in the capital-labor ratio. The wealthy country must have a higher capital-labor ratio than the poor country. But differences in capital-labor ratios and wealth do not affect aggregate economic growth rates in the neoclassical model. The steady state rate of growth of aggregate output of an economy is equal to the labor force growth rate. While one country may be poorer because of a high population and labor force growth rate, its aggregate economic growth rate may still be higher. But it won't catch up unless it can raise its capital-labor ratio.
For the poor country to catch up it must increase the level of capital employed per worker. However, the cruel dilemma of many poor countries is that increased investment requires savings to finance that investment. A higher savings rate means less current consumption. A higher savings rate may be impossible when only the basic necessities of living can be afforded in the poor country.
There is an incentive for capital to flow from the rich country to the poor country because of the declining marginal productivity of capital. If we were to move one unit of capital from the rich to poor country we would give up some small amount of output in the rich country but gain a much larger amount of output in the poor country. In the neoclassical model there is an economic incentive for convergence of the capital-labor ratios across countries.
We have seen an example of convergence in the impressive growth of the Asian "tigers" (such as Japan, Hong Kong, and Korea) over the last 50 years. The infrastructure of the east Asian nations had been decimated by World War II, and their capital-labor ratios were comparatively very low. Through very high savings and investment rates their capital-labor ratios progressively grew. Economic growth came not just from labor force growth and technological change, as in other industrialized countries, but also from an increase in their capital-labor ratios, which moved them up the production curve. This rapid capital intensification cannot go on forever: the declining marginal productivity of capital makes it uneconomical to sustain growth over the long term by increasing capital inputs alone. In fact the 1990s suggest that the enviable growth rates of the Asian tigers have come to an end. The Asian tigers spent 40 years catching up and now they are just like us.
So why do some countries remain terribly poor? The answers are varied but often relate to barriers to the international flow of investment. For example:
While these explanations have appeal they still seem inadequate to explain differences in wealth across countries that last decades and even centuries. In the next section we will briefly describe a recent development in economic growth theory, the endogenous growth model, that attempts to explain why differences may persist and convergence may not occur.
If wealth increases with savings and investment is a higher savings rate always better? In our final application of the Solow growth model the surprising answer is no. It is not output per person that indicates wealth or welfare but consumption per person. We are supposedly better off when we can consume more.
There are only two things we can do with the output we produce: we either consume it or use it as investment to produce other goods and services. Consumption is the difference between total output and investment (or savings) as represented by equation (13).
|C = Y - I = Y - (n + d) K||(13)|
And consumption in equation (13) can be represented on a per worker basis by dividing through by the size of the labor force, L, in equation (14):
|C/L = Y/L - (n + d) K/L||(14)|
Consumption per worker is shown in Figure 3-9 as the distance between output per worker and investment per worker.
Notice in Figure 3-9 that as we increase the capital-labor ratio from a zero starting point the distance between output and investment gets progressively larger. Output rises faster than required investment and consumption per worker rises as the capital-labor ratio increases. At some point a maximum distance is reached and any further increase in the capital-labor ratio reduces the distance between output and investment per worker. Consumption begins to decline because of the curvature of the production function and the declining marginal productivity of capital. Maximum consumption occurs at the point where the slope of the per worker production function equals the slope of the investment line.
An increase in the capital-labor ratio has two opposing effects on consumption. First, a higher capital-labor ratio enables each worker to produce more output, which allows for an increase in consumption. Second, a higher capital-labor ratio requires more ongoing investment to replace worn out capital and equip new workers, which reduces consumption.
The implied level of consumption in Figure 3-9 is plotted in Figure 3-10, which reveals the effect of the trade-off between more output and more investment on consumption. Starting with low levels of capital, increases in the capital-labor ratio allow workers to consume progressively more. At some point a maximum is reached where the declining marginal productivity of capital is no longer large enough to support the level of investment required and consumption begins to decline. The point of maximum consumption is known as the Golden Rule capital-labor ratio (E.S. Phelps, "The Golden Rule of Accumulation: A Fable for Growthmen," American Economic Review, 51, September 1961, 638-643). The Golden Rule suggests that it is possible to save and invest too much.
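With the same illustrative per worker production function f(k) = k^0.3 (again an assumed form used only to make the trade-off concrete), steady-state consumption per worker c = f(k) - (n + d)k can be evaluated over a grid of capital-labor ratios to locate the Golden Rule point:

n, d = 0.02, 0.05   # illustrative labor force growth and depreciation rates

def f(k):
    # Illustrative per worker production function, y = k^0.3
    return k ** 0.3

def consumption(k):
    # Steady-state consumption per worker: output less required investment
    return f(k) - (n + d) * k

grid = [i / 100 for i in range(1, 2001)]   # capital-labor ratios from 0.01 to 20.00
k_gold = max(grid, key=consumption)
print(k_gold, round(consumption(k_gold), 3))
# The maximum is near k = 8, where f'(k) = 0.3 * k**(-0.7) equals n + d = 0.07.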
In the neoclassical growth model, the steady state growth rate of output per worker is exogenous, i.e. determined outside of the model. Increases in output per worker are only possible through technological change and increases in productivity. But the neoclassical model is silent as to the sources of technological change. Technological change is simply assumed to occur at some rate, which is independent of any macroeconomic variables (and is sometimes described as "manna from heaven").
Endogenous growth theory, first developed by Paul Romer and Robert Lucas (Paul Romer, "Increasing Returns and Long-Run Growth," Journal of Political Economy, October 1986, pp. 1002-1037, and Robert E. Lucas, Jr., "On the Mechanics of Economic Development," Journal of Monetary Economics, July 1988, pp. 3-42), focuses on the sources of technological change. Technological change is endogenous in their models, explained by the level of savings and investment. Countries that save more have greater increases in productivity and economic growth rates.
The key feature of endogenous growth models is that the marginal productivity of capital is no longer assumed to be decreasing. In the neoclassical model, the skills of the labor force do not increase as the level of capital increases and we have declining marginal productivity of capital. In endogenous growth models, the growing skills of the labor force may complement increases in capital.
The education, training, and skills of the labor force are referred to as human capital. As economies accumulate physical capital and become wealthier they devote more resources to education, training, and research and development. This investment in human capital increases productivity. If human capital increases at the same rate as physical capital then we could have constant or even increasing marginal productivity of physical capital.
We can identify the implications of the endogenous growth model by starting with a simple aggregate production function with the assumption of a constant rather than declining marginal productivity of capital. To simplify our model we also assume the number of workers does not change and accept the output and labor force growth rate relationship implied by the neoclassical model. This simplification means that the growth rate of aggregate output will be identical to the growth rate of output per worker in this model. We have in equation (15) a direct relationship between aggregate output and the level of physical capital.
|Y = α K||(15)|
In equation (15) each additional single unit of physical capital, K, increases aggregate output, Y, by α units regardless of the level of capital employed. Because α does not depend on the level of capital we have constant marginal productivity of capital. The growth rate of output is equal to the growth rate of capital as shown in equation (16).
|ΔY / Y = ΔK / K||(16)|
We add to this model a representation for savings in equation (17). Aggregate savings is assumed to be constant at a fixed proportion, s, of total output.
|S = s Y = s α K||(17)|
Investment equals additions to the level of capital plus depreciation, equation (18), just as it did in the neoclassical model:
|I = ΔK + d K||(18)|
In steady state aggregate savings must equal aggregate investment:
|s α K = ΔK + d K||(19)|
Rearrange equation (19) to solve for the change in the physical capital stock:
|ΔK = s α K - d K||(20)|
Divide both sides of equation (20) by K to get the growth rate of the physical capital stock:
|ΔK / K = s α - d||(21)|
Substitute the solution for the growth rate of the capital stock in equation (21) back into the output growth equation (16) and we get the result in equation (22) that the growth rate of output is a function of the savings rate:
|ΔY / Y = s α - d||(22)|
The result that the growth rate of both total output and output per worker are a function of the savings rate is a significant departure from the neoclassical model. Savings affects long-run growth in the endogenous growth model because higher rates of saving lead to greater investment in human capital, which contributes to greater labor productivity.
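A minimal sketch of the AK result in equations (15) through (22), with illustrative parameter values for α and d (assumptions, not numbers from the text). The growth rate of output is s α - d, so a higher savings rate permanently raises growth:

alpha = 0.35   # output produced per unit of capital (illustrative)
d = 0.05       # depreciation rate (illustrative)

def ak_growth_rate(s):
    # Endogenous (AK) growth model: dY/Y = s * alpha - d
    return s * alpha - d

for s in (0.10, 0.20, 0.30):
    print(f"savings rate {s:.0%}: output growth {ak_growth_rate(s):.1%}")
# A 10% savings rate gives -1.5% growth, 20% gives 2.0%, and 30% gives 5.5%.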
Figure 3-11 presents a scatter plot of annual average savings rates (as a percentage of real GDP per worker) and real GDP per worker growth rates across countries over the period 1950 through 2000. The dashed linear regression line in Figure 3-11 indicates that each 10 percentage point increase in the average savings rate is associated with a growth rate of real GDP per worker that is 0.45 percentage points higher. The United States falls right in the middle of the graph with an average savings rate of 19.2 percent and a real GDP per worker growth rate of 1.95 percent.
The neoclassical growth model suggested that standards of living of the poor and wealthy countries should converge on each other. The assumed declining marginal productivity of capital suggests that investment should migrate from the wealthy countries to the poorer countries, which have lower capital-labor ratios and higher marginal productivity from new capital investment. While there is some capital migration and some poor countries have converged on the wealthy in recent decades (in particular, the Asian tigers such as Hong Kong and Japan), this seems to be the exception rather than the rule.
The endogenous growth model provides an explanation for the absence of convergence. When there is a constant or even increasing marginal productivity of capital we should not expect the migration of investment from wealthy to poor countries as implied by the neoclassical model. Moreover, while there may be no barriers to the movement of physical capital from wealthy to poor countries, human capital cannot be transferred so easily. Differences in standards of living and economic growth rates may be sustained over time and we may not see convergence.
We saw that the neoclassical and endogenous growth models provided some conflicting implications with respect to convergence between poor and wealthy nations. Their implications for the role of government policy in promoting long-run economic growth are also somewhat conflicting.
Government policies designed to promote long-run economic growth can generally be placed into one of three categories: policies that encourage saving, policies that stimulate investment, and policies that support technological progress and human capital (research and development, education, and health).
Policies that promote savings generally also stimulate capital investment and vice versa because of the impact on interest rates. A higher savings rate should lower real interest rates thus lowering the cost of borrowing for investment. Policies that stimulate investment would tend to raise the interest rate and provide an increased incentive for savings.
Most politicians and some economists have at some point chastised the American public for not saving enough. Perhaps the most significant forced-saving policy enacted in this country was Social Security. More recently, tax policies have been enacted to promote individual retirement accounts. Perhaps the biggest role in saving, however, is played by the government itself. When a government spends more than it collects in taxes it reduces the rate of national saving: a household may be saving money, but some of those funds go to the government to finance its own consumption spending.
Almost any policy designed to lower the cost of doing business will indirectly stimulate investment spending.
The neoclassical model suggests that policies designed to increase savings and investment spending may lead to a sustained increase in the standard of living, but only a temporary boost to economic growth. If a nation increases its rate of savings there are more funds available for investment. The capital-labor ratio rises, with a resulting increase over time in the standard of living. But increasing savings and investment is not painless. Some current consumption must be foregone when the rate of saving increases; the payoff to a higher savings rate is not immediate but delayed. Moreover, the effect on the long-run economic growth rate is only temporary. With a higher savings rate the economy moves to a new steady-state capital-labor ratio and then returns to its previous long-run growth rate, although at a higher standard of living. If the savings or investment policy is not maintained then the opposite occurs: investment falls off, there is a mini boom in consumption spending, the capital-labor ratio declines, and the economy returns to its original growth path and standard of living.
The endogenous growth model, on the other hand, suggests policies that promote savings and investment may lead to a sustained increase in the growth rate of the economy, as long as those savings contribute to investment in new technology and human capital. Because growth in output and the standard of living is a function of the savings rate and human capital, the endogenous growth model provides a role for government policy that was absent in the neoclassical model. Governments can promote economic growth through policies that encourage savings as well as research and development, education, and health.
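To make the neoclassical prediction concrete, the following is a minimal numerical sketch (not taken from the text) of a Solow-style economy in which the savings rate is raised at some date. All parameter values are illustrative assumptions. Growth of output per worker rises for a while after the policy change and then settles back toward the rate of technological progress, while the level of output per worker remains permanently higher.

```python
# Minimal Solow-model sketch: a higher savings rate gives a temporary
# growth boost but only a permanent *level* effect. All parameter values
# are illustrative assumptions, not figures from the text.

alpha = 0.3      # capital share (elasticity of output with respect to capital)
delta = 0.05     # depreciation rate
g     = 0.02     # rate of labor-augmenting technological progress
n     = 0.01     # labor force growth rate

def simulate(s_before, s_after, switch_year, years=60):
    # start at the steady-state capital per effective worker under s_before
    k = (s_before / (delta + g + n)) ** (1 / (1 - alpha))
    path = []
    for t in range(years):
        s = s_before if t < switch_year else s_after
        y = k ** alpha                    # output per effective worker
        path.append(y)
        # law of motion for capital per effective worker
        k = k + s * y - (delta + g + n) * k
    return path

path = simulate(s_before=0.20, s_after=0.30, switch_year=20)

# Growth of output per worker = growth of y (per effective worker) + g
for t in (10, 21, 25, 40, 59):
    growth = (path[t] / path[t - 1] - 1) + g
    print(f"year {t:2d}: growth of output per worker = {growth:.3%}")
```

With these assumed numbers, growth sits at roughly the 2% rate of technological progress before the switch, jumps above it for a number of years after the savings rate rises, and then drifts back toward 2% at a permanently higher level of output per worker.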
Production is a function of the economy's use of capital, K, labor, L, and a productivity index, A.
$$Y = A\, f(K, L, t) \qquad (A1)$$

Take the total derivative of equation (A1) with respect to time, writing $f(\cdot)$ as an abbreviation for $f(K, L, t)$:

$$\frac{dY}{dt} = \frac{dA}{dt}\, f(K, L, t) + A\,\frac{df(\cdot)}{dK}\,\frac{dK}{dt} + A\,\frac{df(\cdot)}{dL}\,\frac{dL}{dt} + A\,\frac{df(\cdot)}{dt} \qquad (A2)$$
We can make equation (A2) easier to read by abbreviating the time derivatives (for example, writing ΔY for dY/dt):
$$\Delta Y = \Delta A\, f(K, L, t) + \frac{dY}{dK}\,\Delta K + \frac{dY}{dL}\,\Delta L \qquad (A3)$$
Divide equation (A3) through by Y, or the equivalent A f(K, L, t):
$$\frac{\Delta Y}{Y} = \frac{\Delta A}{A} + \frac{dY}{dK}\,\frac{\Delta K}{Y} + \frac{dY}{dL}\,\frac{\Delta L}{Y} \qquad (A4)$$
Multiply the second term on the right-hand side of equation (A4) by K/K and the third term by L/L.
$$\frac{\Delta Y}{Y} = \frac{\Delta A}{A} + \frac{dY}{dK}\,\frac{K}{Y}\,\frac{\Delta K}{K} + \frac{dY}{dL}\,\frac{L}{Y}\,\frac{\Delta L}{L} \qquad (A5)$$
Finally, we can simplify equation (A5) by representing the term (dY K)/(dK Y) as the elasticity of output with respect to capital, εK, and the term (dY L)/(dL Y) as the elasticity of output with respect to labor, εL. For example, the elasticity of output with respect to capital is the percent change in output, dY/Y, divided by the percent change in capital, dK/K.
$$\frac{\Delta Y}{Y} = \frac{\Delta A}{A} + \varepsilon_K\,\frac{\Delta K}{K} + \varepsilon_L\,\frac{\Delta L}{L} \qquad (A6)$$
Equation (A6) is the basic growth accounting equation that decomposes growth in output into three parts:
Because elasticities are not observable (without knowing the form of the production function), they cannot be used directly in empirical analysis. But we can transform the elasticities into income shares that can be calculated using government GDP/income survey data.
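As a quick illustration of equation (A6) in practice, here is a small Python sketch (with made-up growth rates and shares) that backs out productivity growth, ΔA/A, as the residual once the observed growth of output, capital, and labor is weighted by the income shares:

```python
# Growth-accounting sketch based on equation (A6):
#   ΔY/Y = ΔA/A + εK·(ΔK/K) + εL·(ΔL/L)
# Under perfect competition and constant returns to scale, εK and εL can be
# replaced by the capital and labor income shares. All numbers are made up.

output_growth  = 0.030   # ΔY/Y
capital_growth = 0.040   # ΔK/K
labor_growth   = 0.010   # ΔL/L
capital_share  = 0.30    # rK/pY, taken from national income data
labor_share    = 1.0 - capital_share   # constant returns: shares sum to 1

# Solve (A6) for the productivity term ΔA/A (the "Solow residual")
productivity_growth = (output_growth
                       - capital_share * capital_growth
                       - labor_share * labor_growth)

print(f"Productivity growth (Solow residual): {productivity_growth:.3%}")
# With these illustrative numbers the residual is about 1.1% per year.
```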
Assume a perfectly competitive market. The first-order condition of the microeconomic profit maximization problem is that inputs are paid the value of their marginal products, as represented in equation (B1).
$$\frac{dY}{dK} = \frac{r}{p} \qquad (B1)$$
The definition of the elasticity of output with respect to capital is presented in equation (B2):
$$\varepsilon_K = \frac{dY}{dK}\,\frac{K}{Y} \qquad (B2)$$
Substituting the profit maximization first-order condition that dY/dK = r/p from equation (B1) into equation (B2), we get the result in equation (B3) that the elasticity of output with respect to capital is equal to the income share to capital. The income share of capital equals the rents paid to the owners of capital for the use of their equipment, r K, divided by the value of total sales, p Y.
$$\varepsilon_K = \frac{r K}{p Y} \qquad (B3)$$
The same procedure can be used for labor where the profit maximization first-order condition is dY/dL = w/p, where w is the wage rate.
If we assume constant returns to scale the elasticities of output with respect to capital and labor sum to 1. We start with the standard profit equation (C1).
$$\text{Profits} = pY - rK - wL \qquad (C1)$$
Some basic microeconomics (that we will not show here) reveals that constant returns to scale implies total profits equal zero as in equation (C2).
$$0 = pY - rK - wL \qquad (C2)$$

$$pY = rK + wL \qquad (C3)$$
Divide both sides of equation (C3) by p Y:
$$\frac{pY}{pY} = \frac{rK}{pY} + \frac{wL}{pY} \qquad (C4)$$
Equation (C4) simplifies to:
$$1 = \frac{rK}{pY} + \frac{wL}{pY} \qquad (C5)$$
Equation (C5) shows that the income shares of capital (rK/pY) and labor (wL/pY) sum to 1. In the previous section we showed that elasticities of output are equivalent to income shares. Thus the elasticities of output with respect to capital and labor sum to 1.
$$1 = \varepsilon_K + \varepsilon_L \qquad (C6)$$
Unit 2: Managing the economy
There are two different types of economic growth, known as actual growth and potential growth.
Actual growth is measured as increases in real GDP, and potential growth is an increase in the capacity in the economy.
Actual output means the real output which the country produces with the current employment of factors of production. We should note here that not all the available resources are employed at any given time.
However, potential output means what the economy could produce if all resources were fully employed. Therefore, if there is an increase in resources, or an increase in the productive capacity of the economy, we say that there is potential economic growth.
It is also important to compare actual and potential output; the difference between the two is known as the output gap. A negative output gap signifies that the economy is operating with spare capacity, which also means unemployment. Therefore, if the output gap is too big, unemployment will be a concern for the economy. However, if aggregate demand exceeds aggregate supply, which means the economy is trying to operate above capacity, there will be a problem of inflation.
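The output gap is usually expressed as the percentage difference between actual and potential output. A minimal sketch of the arithmetic, with invented GDP figures:

```python
# Output gap sketch: percentage difference between actual and potential GDP.
# The GDP figures below are invented for illustration only.

actual_gdp    = 950.0    # real GDP actually produced
potential_gdp = 1000.0   # output if all resources were fully employed

output_gap = (actual_gdp - potential_gdp) / potential_gdp * 100
print(f"Output gap: {output_gap:.1f}% of potential output")
# A negative gap (here -5.0%) signals spare capacity and unemployment;
# a positive gap signals the economy operating above capacity, risking inflation.
```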
Growth can be achieved by increases in the components of aggregate demand, for example an increase in consumer spending. The size of this increase depends on the size of the multiplier, and therefore any changes in injections and leakages will have an impact on the degree of change in growth.
The following diagram shows an increase in aggregate demand which results in increased output.
As we can see, the outward shift of AD increased real national output. A positive change to any component of aggregate demand (C+I+G+X-M) will increase aggregate demand and can result in economic growth in the short run. However, the price level could also increase, as we can see from the diagram. The more inelastic the AS curve is, the higher the increase in price will be for any increase in aggregate demand. If there is spare capacity in the economy, then an increase in AD will cause a higher level of real GDP.
AD can increase for the following reasons:
Lower interest rates – Lower interest rates reduce the cost of borrowing and so encourage spending and investment.
Increased wages – Higher real wages increase disposable income and encourage consumer spending.
Increased government spending (G).
Fall in value of the country’s currency, which makes exports cheaper and increases the quantity of exports (X).
Increased consumer confidence, which encourages spending (C).
Lower income tax which increases disposable income of consumers and increases consumer spending (C).
Economic growth can also be achieved by an increase in, or improvement of, any of the factors of production, e.g. productivity growth or immigration. The effect is to shift the aggregate supply curve to the right. The diagram on the left shows the short-run AS curve.
Economic growth can also be shown by a long run rightward shift of the AD and AS Curves shown in the diagram below.
– Increased capital. e.g. investment in new factories or investment in infrastructure, such as roads and telephones.
– Increase in working population, e.g. through immigration, higher birth rate.
– Increase in Labour productivity, through better education and training or improved technology.
– Discovering new raw materials.
– Technological improvements to improve the productivity of capital and labour e.g. Microcomputers and the internet have both contributed to increased economic growth.
Similarly, if there is any opposite change to the above causes, it will turn out to be a constraint on economic growth.
Real economic growth stimulates higher employment since labour is a derived demand. An increase in real GDP should cause an outward shift in the aggregate demand for labour. Not all industries will share in the growth of an economy.
The accelerator effect of growth on capital investment: Rising demand and output encourages investment in capital – this helps to sustain GDP growth by increasing LRAS.
Higher revenue for the government
Growth has a positive effect on Government finances – boosting tax revenues and helping to reduce the budget deficit. More people in work, rising spending and higher company profits all contribute to an increased flow of revenue to the Treasury.
Greater business confidence: Growth has a positive impact on profits & business confidence.
Improvements in living standards: Growth is an important avenue through which per capita incomes can rise and absolute poverty can be reduced in developing nations.
If the economy grows too quickly there is the danger of inflation as demand races ahead of the ability of the economy to supply goods and services. Producers then take advantage of this by raising prices for consumers.
Fast growth can create negative externalities (increased pollution and congestion) which damages overall social welfare
Not all of the benefits of economic growth are evenly distributed. We can see a rise in national output but also growing income and wealth inequality in society. There will also be regional differences in the distribution of rising income and spending.
In O Level tutorials, we learned about individual demand and supply curves and how equilibrium is determined at micro-economic level. Now, let us look at how price level and equilibrium level of real output is determined at macro-economic level.
Macroeconomic equilibrium for an economy in the short run is established when aggregate demand intersects with aggregate supply. This is shown in the diagram below.
At the price level P, the aggregate demand for goods and services is equal to the aggregate supply of output. The output and the general price level in the economy will tend to adjust towards this equilibrium position.
If the price level is too high, there will be an excess supply of output. If the price level is below equilibrium, there will be excess demand in the short run. In both situations there should be a process taking the economy towards the equilibrium level of output.
When the Aggregate Demand Curve shifts to the right from AD to AD1, the price level increases from P to P1, and the output level increases from Y to Y1.
Anything that affects the components of aggregate demand (consumption, investment, government spending and net exports) will shift the AD curve.
Aggregate Demand can increase or decrease depending on several things. In effect, these things will cause shifts up or down in the AD curve. These include:
Exchange Rates: When a country’s exchange rate increases, then net exports will decrease and aggregate expenditure will go down at all prices. This means that AD will decrease.
Distribution of Income: This is directly related to wages and profits. When worker’s real wages increase, then people will have more money on their hands because their overall income has increased. When this happens they tend to consume more causing the consumption expenditures to increase.
Expectations: Consumers tend to have certain expectations about the future of the economy and will adjust their spending accordingly. If they expect the economy to do poorly in the future, saving will increase and overall expenditure will decrease. Expected rises in the price level work the other way: if consumers foresee prices rising in the near future, they may buy goods now, increasing consumption expenditure in AD. Many different expectations have the capacity to increase or decrease aggregate demand, and it is not always clear how this will happen.
Foreign Income: This relates the country’s economic output with the income of its trading partners in the world. When foreign income rises, the country’s exports will increase causing aggregate demand to increase.
Monetary and Fiscal Policies: The government has some ability to influence AD. It can change its spending or taxes in order to influence how consumers spend or save. An expansionary fiscal or monetary policy causes AD to increase, while a contractionary policy causes AD to decrease.
Suppose that increased efficiency and productivity together with lower input costs (e.g. of essential raw materials) causes the short run aggregate supply curve to shift to its right. (i.e. an increase in supply – assume no shift in aggregate demand).
The diagram shows what is likely to happen. AS shifts outwards and a new macroeconomic equilibrium will be established. The price level has fallen and real national output (in equilibrium) has increased to Y2.
An injection such as an increase in exports means that there is an immediate increase in AD. But the extra income raised by selling goods abroad will raise the incomes of those making the goods and services, and this income will be spent in the economy. Whatever is not spent on withdrawals will cause second-round increases in AD, which lead to further rounds of income and spending. These knock-on effects are the multiplier effects of an increase in injections, and the process works in reverse when injections fall: a reverse multiplier, or multiplied contraction of AD.
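The round-by-round process described above can be sketched numerically. The code below assumes a simple marginal propensity to consume (MPC) out of each extra unit of income; the MPC and the size of the injection are illustrative assumptions only.

```python
# Multiplier sketch: an injection (e.g. extra exports) raises income,
# part of which is re-spent, and so on. MPC and injection size are assumptions.

mpc = 0.8            # marginal propensity to consume (the rest leaks to saving, taxes, imports)
injection = 100.0    # initial increase in exports

total_increase = 0.0
round_spending = injection
for round_number in range(1, 21):          # first 20 rounds of spending
    total_increase += round_spending
    round_spending *= mpc                  # each round, only the MPC share is re-spent

print(f"Increase in AD after 20 rounds: {total_increase:.1f}")
print(f"Simple multiplier 1/(1-MPC):    {1 / (1 - mpc):.1f}  (limit = {injection / (1 - mpc):.0f})")
```

With an assumed MPC of 0.8 the simple multiplier is 5, so a 100 injection eventually raises AD by about 500; after 20 rounds the cumulative increase is already close to that limit.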
Next topic: Causes, costs and constraints on economic growth
In O Level Economics lessons, we learned that the ‘supply’ means willingness and ability of producers/suppliers to offer goods and services for sale. The quantity of goods and services supplied increases as the price goes up. Therefore, if we represent this in a diagram, the supply curve is upwards sloping.
If we add up the supply curves of all producers in the economy, we can develop an Aggregate Supply Curve (AS). The aggregate supply curve shows what happens to the total output of all the goods and services in the economy as the general price level changes. Just like individual supply curves, the AS curve also slopes upwards because producers as a whole will expand the amount they are willing to supply as prices rise. Therefore, AS represents the ability of an economy to deliver goods and services to meet demand.
The nature of this relationship will differ between the long run and the short run
Short Run Aggregate Supply (SRAS) shows total planned output when prices in the economy can change but the prices and productivity of all factor inputs e.g. wage rates and the state of technology are held constant.
In the short run, the Aggregate Supply curve reflects a positive relationship between the price level and the real quantity of National Output.
This short-run positive relationship occurs primarily because production costs (e.g., wages) are “sticky” relative to output prices when demand changes. Increases to Aggregate Demand cause movements up along the Aggregate Supply curve in which prices rise more quickly than wages, so higher profit per unit induces more output. Declines in Aggregate Demand reverse these movements along the Aggregate Supply curve – prices fall more quickly than costs, so profits decline and firms reduce production.
The short-run aggregate supply curve shifts under similar circumstances as individual supply curves.
So, anything that is able to change the factor costs will be a shift factor of Short-Run Aggregate Supply.
|Shift Factor||The change to AS Curve||Reason|
|Increase in labour force or capital stock||AS Curve will shift to its right||More output can be produced at every price level|
|Increase in productivity||AS Curve will shift to its right||Fall in the unit costs of production|
|Increase in the expected future price level||AS Curve will shift to its left||Workers and firms increase wages and prices|
|Increase in government taxes||AS Curve will shift to its left||Costs increase|
Long run aggregate supply (LRAS): LRAS shows total planned output when both prices and average wage rates can change – it is a measure of a country’s potential output and the concept is linked to the production possibility frontier.
In the long run, the LRAS curve is assumed to be vertical (i.e. it does not change when the general price level changes)
Aggregate demand (AD) is the total demand for final goods and services in the economy at a given time and price level. It is the amount of goods and services in the economy that will be purchased at all possible price levels.
Aggregate means ‘total’ and in this case we use the term to measure how much is being spent by all consumers, businesses, the government and people and firms overseas.
This diagram shows the downward sloping Aggregate Demand curve. The AD curve is not always a straight line. Many argue that the AD curve is actually a rectangular hyperbola.
The total amount spent is likely to be fairly constant along the AD curve, and therefore the area under the AD curve is likely to remain fairly constant, as in a rectangular hyperbola.
There are various reasons why the AD curve is sloping downwards.
One reason is that, at higher prices, an economy’s exports are likely to decrease and imports tend to increase. That means total net exports (X-M) will decrease (see below for the components of AD).
Another argument for the downward sloping AD curve is that at higher prices the interest rate is likely to be higher, meaning that investment (a component of AD) is lower. Consumers might also save more.
Aggregate demand (AD) = total spending on goods and services
AD = C + I + G + (X-M)
C: Consumption, this includes demand for durables e.g. audio-visual equipment and motor vehicles & non-durable goods such as food and drinks which are “consumed” and must be re-purchased.
I: Capital Investment – This is spending on capital goods such as plant and equipment and buildings to produce more consumer goods in the future. Investment also includes spending on working capital such as stocks of finished and semi-finished goods.
G: Government Spending – This is spending on state-provided goods and services including public goods and merit goods .
Government spending is by central and local government on goods and services. While to some extent this spending is determined by the fiscal policy of the government, it is also largely dependent upon the business cycle. In a boom, tax receipts increase and the demands on government spending will fall, and vice versa in an economic slowdown.
Changes in G are likely to have a large multiplier effect, in that the spending changes have a direct impact upon the spending in the economy.
(X-M) = Exports – Imports: Net exports measure the value of exports minus the value of imports. When net exports are positive, there is a trade surplus (adding to AD); when net exports are negative, there is a trade deficit (reducing AD).
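A minimal sketch of the AD identity, using component values invented purely to illustrate the arithmetic:

```python
# Aggregate demand identity: AD = C + I + G + (X - M).
# All component values below are invented for illustration only.

consumption         = 600.0   # C
investment          = 150.0   # I
government_spending = 200.0   # G
exports             = 120.0   # X
imports             = 140.0   # M

net_exports = exports - imports              # X - M (negative = trade deficit)
aggregate_demand = consumption + investment + government_spending + net_exports

print(f"Net exports (X - M): {net_exports:.0f}")
print(f"Aggregate demand:    {aggregate_demand:.0f}")
```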
Next topic: Aggregate Supply
Definition: Money is anything which is universally acceptable as a medium of exchange. Therefore if we can buy goods and services with it then it could be seen as money.
In ancient times, before people started to use money, a barter system was used to exchange goods for one another. The barter system was useful because it allowed people to exchange what they had in excess for things which they did not have. However, there were problems with the barter system. For example, how many goats would be exchanged for one cow? How many bags of rice for one bag of wheat flour?
There was also another problem: if A has rice and wants wheat flour, and B has wheat flour but wants only fish, then the barter system cannot satisfy their wants unless there is a C who has fish and wants rice. A has to go to C and exchange rice for fish, and only then can A go to B to exchange the fish for wheat flour.
The creation of money solved this problem.
Read the story “The Goldsmith Who Became a Banker — A True Story” to get an idea of how people started using money in ancient days.
Also read A brief history of Money
Medium of Exchange – When money is used to intermediate the exchange of goods and services, it is performing a function as a medium of exchange. It thereby avoids the inefficiencies of a barter system. Exchange is easier and less time consuming in a money economy than in a barter economy.
Measure of Value / Unit of Account – e.g. 1 apple = MVR5, while a can of Redbull = MVR25. In a barter system (as described above), even if a double co-incidence of wants is found, there is no common unit of measure. In today’s world, each and every country has money. Therefore, determining the relative prices is very easy and quick.
Store of Value – To act as a store of value, a money must be able to be reliably saved, stored, and retrieved – and be predictably usable as a medium of exchange when it is retrieved. The value of the money must also remain stable over time.
Standard for deferred payments – Money is also inevitably used as the unit in terms of which all future or deferred payments are stated. Future transactions can be carried on in terms of money. The loans, which are taken at present, can be repaid in money in the future. The value of the future payments is regulated by money.
Money must be durable, which means it should be usable for a long time and must be of good quality. It should not be something that gets damaged easily or spoiled in a short period of time. Since money is durable, it can be used as a store of wealth/value.
For anything to have economic value, it must be scarce. Money is scarce and that is why it has value. People will accept something as money only if it has value.
Money must be something that people can easily carry with them from one place to another. Today paper currency is used instead of gold and silver because paper currency is more portable.
Money must be something that everyone can accept for a unit of account and medium of exchange.
Money must be something that can measure all the goods and services accurately. For this purpose, money must be something that we can divide into small denominations.
Money must be something which has a relatively stable value over time. It should not lose its value over time. Its function as a store of value can be fulfilled only if its value is stable.
Direct taxes are paid directly by individuals or businesses to the Government, while indirect taxes are taxes paid on goods and services by the consumer. Eligible persons should pay the taxes they are applicable to according to the country's tax laws. Hence, knowing about taxes and their importance is necessary. Here, we guide you through the difference between direct and indirect taxes.
Taxes are one of the prominent sources of income for the government of every country. These taxes are charged in various ways- salary, paying for meals at a restaurant, paying tolls when driving cars, or purchasing groceries at general stores. Being responsible citizens, it is our duty to pay taxes, and it is essential to be aware of the different types of taxes imposed on us. Taxes are of two types- Direct Taxes and Indirect Taxes.
In this session, we learn the differences between Direct and Indirect Taxes.
What are Taxes?
A tax is defined as a mandatory financial charge or any other type of charge imposed on a taxpayer by a governmental organization for funding governmental and various expenses.
Most countries have a tax policy to pay for public, societal, or approved national needs and for the smooth functioning of the government. In some cases, taxes are charged at a flat percentage rate on personal income, while in most cases the rate increases in steps across ranges of annual income.
Taxation plays a vital role in a country's economy by providing revenue for the government to fund various projects and services such as infrastructure development, education, healthcare, and public safety. They can also influence economic behavior, such as encouraging people to save or invest by offering tax incentives or discouraging activities that are harmful to society, such as smoking, by imposing high taxes on Tobacco and other products.
Generally, Taxes can be classified into two types,
- Direct Taxes
- Indirect Taxes
Every existing tax falls under either direct or indirect taxes. Tax policies differ across countries, and it is necessary for individuals and organizations to research, understand, and follow the best tax practices while earning an income or starting a business.
Differences between Direct and Indirect Taxes
It is essential to understand the difference between direct and indirect taxes because they have different economic impacts and affect different people in different ways. Direct Taxes have a direct impact on the taxpayer's disposable income and can affect their spending behavior, savings, and investment decisions. Whereas, Indirect Taxes can affect consumer behavior by making certain goods or services more expensive, which may lead to reduced demand for them.
A Direct Tax is a type of tax paid directly to the authority that charges the tax. For instance, the government charges income tax and you pay it directly to the government. You may note that direct taxes cannot be transferred to another individual or an entity. In every country, the concerned tax authority has the responsibility to control tax-related activities.
They are typically based on the income or wealth of the taxpayer, and the amount owed is calculated using a progressive tax system, meaning that those with higher incomes pay a higher percentage of their income in taxes. Here is a list of familiar direct taxes available throughout the world.
Once the tax amount is determined, the taxpayer must pay the tax owed to the government. This may be done through a variety of methods, such as electronic transfer, check, or credit card. Failure to pay the tax owed can result in penalties, interest charges, or other legal consequences.
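To illustrate what the progressive system described above looks like in practice, here is a small sketch with entirely hypothetical brackets and rates (real schedules differ by country); it only shows how a higher income attracts a higher average tax rate.

```python
# Progressive income-tax sketch. Brackets and rates are hypothetical,
# chosen only to show how higher incomes pay a higher percentage.

BRACKETS = [                 # (upper limit of bracket, marginal rate)
    (10_000, 0.00),
    (40_000, 0.10),
    (80_000, 0.20),
    (float("inf"), 0.30),
]

def income_tax(income):
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        taxable = max(0.0, min(income, upper) - lower)   # slice of income in this bracket
        tax += taxable * rate
        lower = upper
        if income <= upper:
            break
    return tax

for income in (30_000, 60_000, 120_000):
    tax = income_tax(income)
    print(f"income {income:>7,}: tax {tax:>9,.0f}  (average rate {tax / income:.1%})")
```

With these made-up brackets, the average rate rises from about 7% on 30,000 to about 19% on 120,000, which is the defining feature of a progressive direct tax.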
Indirect tax is the tax charged on consuming goods and services. These taxes are not directly charged to a person’s income. However, the taxpayer has to pay the tax along with the cost of goods and services bought by the seller. For example, when you buy a candy bar at the store, the price you pay includes an indirect tax like a sales tax. The store collects the tax from you, and then sends it to the government.
So, even if you don't realize it, you're paying indirect taxes all the time when you buy things. Here is a list of Indirect taxes available throughout the world.
Direct Tax vs Indirect Tax
Direct and indirect taxes are differentiated based on the way they are levied and who ultimately bears the economic burden of the tax. Direct taxes are taxes that are levied on individuals or businesses based on their income, profits, or assets. Whereas, Indirect Taxes are levied on goods and services rather than on individuals or businesses. Indirect taxes are included in the price of the goods or services and are paid by the end consumer.
Here is the difference between Direct and indirect taxes.
|S.no||Direct tax||Indirect Tax|
|1||Direct taxes are paid directly by the taxpayers to the government, and the burden of the tax falls directly on the taxpayer||The burden of the Indirect taxes is indirectly passed on to the consumer, as they ultimately pay the tax through the higher prices they pay for goods and services|
|2||Examples are income tax, property tax (charged on real estate), wealth tax (charged on inherited wealth), and so on. Corporate tax is imposed on corporate businesses.||Examples are Value Added Tax (VAT), GST, central excise duty, etc. VAT is imposed on the price of the product, whereas the central excise tax is imposed on the manufacture and retail of goods.|
|3||Collecting direct tax is a complex task unless it is deducted at the source, as happens in the case of salaried individuals. It is a different situation when collecting taxes from the business classes, where people find ways to avoid taxes, and it has been difficult to identify and penalize them.||The tax on goods and services is already decided and is charged along with the price of the product. Hence there is no chance of avoiding it. You can always find the taxes mentioned on the cover of a consumer product.|
|4||Direct tax help in improving the economy and controlling inflation.||Imposing indirect taxes leads to improving the economy but can result in inflation.|
|5||Direct Taxes applies only to moderate and high-earning individuals, businesses, and enterprises.||Indirect Taxes have a greater impact on low-income individuals and households than on high-income ones.|
|6||Direct taxes tend to exhaust a part of the income and discourage savings. When people try to avoid paying taxes, the burden of paying them falls on a smaller section of society.||In the case of savings, indirect taxes lessen personal consumption and increase savings. For example, consumers are cautious about consuming products that are taxed heavily.|
|7||Direct taxes such as income tax play a role in reducing socioeconomic inequality. The money from taxpayers is used for the welfare of the entire society, and everyone tends to benefit from the same. One of the best examples is public transport.||Indirect taxes broaden the socioeconomic gap between the rich and the poor. Only the rich can afford better quality products which may be essential for all, while the weaker sections of society may not be able to consume certain goods.|
Should Everyone Pay Both Direct and Indirect Taxes?
Yes, it is important for eligible individuals and businesses to pay both direct and indirect taxes as they are necessary to fund public services and investments. The eligibility criteria differ based on the respective country's tax regulations and the qualifying source of income. The government separately collects both direct and indirect taxes. While direct taxes are imposed on profits and income, indirect taxes are imposed on consumable goods and services. Hence taxpayers must pay the tax regularly to avoid penalties.
Direct Taxes or Indirect Taxes, which plays an important part in the economy?
The direct and indirect taxes are roughly equal in their contribution to government revenue in advanced economies. Both taxes are necessary to fund public services and investments, and they are generally designed with revenue-raising potential in mind. To validate this, according to the International Monetary Fund (IMF), in 2020, direct taxes accounted for 50.6% of total tax revenues in advanced economies, while indirect taxes accounted for 49.4%.
What are the Direct and Indirect Taxes in the UAE?
In the UAE, the main types of direct and indirect taxes are
- Corporate income tax: a 9% corporate tax is to be levied on businesses from June 1, 2023.
- Value-added tax (VAT): A 5% VAT was introduced in the UAE on January 1, 2018, on most goods and services.
- Excise tax: A 100% excise tax is applied to tobacco products and energy drinks, and a 50% excise tax is applied to carbonated drinks (a small arithmetic sketch follows this list).
- Customs duties: Customs duties are applied to certain goods imported into the UAE, although there are exemptions for some products.
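Using the rates listed above, the sketch below shows how excise and VAT raise a shelf price. The base prices are invented, and the ordering (excise applied first, VAT on the excise-inclusive price) is a common convention that should be confirmed against the actual regulations.

```python
# Illustrative UAE-style indirect tax arithmetic. Base prices are invented;
# the tax ordering is an assumption, not a statement of the actual rules.

VAT_RATE = 0.05          # 5% VAT
EXCISE_RATES = {
    "energy drink":     1.00,   # 100% excise
    "carbonated drink": 0.50,   # 50% excise
    "bottled water":    0.00,   # no excise
}

def shelf_price(base_price, product):
    with_excise = base_price * (1 + EXCISE_RATES[product])   # excise first
    return with_excise * (1 + VAT_RATE)                      # then VAT on top

for product, base in [("energy drink", 10.0), ("carbonated drink", 4.0), ("bottled water", 2.0)]:
    print(f"{product:16s}: base {base:5.2f} AED -> shelf {shelf_price(base, product):5.2f} AED")
```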
Tax Consultancy Services
Tax consultants across the world help you with relevant information on various types of direct and indirect taxes imposed in the country and ensure that people are compliant with the regulations of the government. BMS Auditing is one of the leading firms with highly experienced tax consultants and Tax agents serving global clients from UAE, KSA, Qatar, Bahrain, Oman, India, UK and USA. Our corporate tax service specialists assist you with the best tax compliance for the Tax Authorities to ensure that your business is compliant with the laws and regulations of the country.
The experts also help you with tax-related activities such as registration, return filing, or refund to prevent you from paying penalties.
Have queries regarding direct and indirect taxes? BMS is here to serve you always.
Drawing a model in 3D is different from drawing an image in 2D. This introduction to drawing basics and concepts explains a few ways you can create edges and faces (the basic entities of any SketchUp model). You also discover how the SketchUp inference engine helps you place those lines and faces on your desired axis.
Table of Contents
Drawing a line
Use the Line tool to draw edges (also called line entities). Edges form the structural foundation of all models. Here’s how to draw a line:
- Select the Line tool () on the toolbar (or press the L key). The cursor changes to a pencil.
- Click to set the starting point of your line. If you click the wrong place, press the Esc key to start over. As you move your cursor around the drawing area, notice the following:
- A line follows your cursor.
- The line length is displayed dynamically in the Measurements box. (The Measurements box uses the units specified in your template.)
- The line that’s following your cursor turns red, green, or blue whenever the line is parallel with the red, green, or blue axis, respectively. If you hover for a moment, a ScreenTip appears, like the On Blue Axis tip shown in the figure. There is no ghost in your machine; that’s the SketchUp inference engine, which you learn more about later in this article.
- Click to set the line’s end point. This end point can also be the starting point of another line. Press Esc or select a different tool when you’re done drawing lines. After you set the end point, you can press Ctrl+Z (Microsoft Windows) or Command+Z (macOS) to undo your line and start over.
- (Optional) To make your line a precise length, type a value and press Enter (Microsoft Windows) or Return (macOS). You can repeat this process as many times as you like until you draw a new line or select another tool. If you don’t specify a unit, SketchUp uses the unit specified in your template. However, you can type any imperial or metric unit for your line. So you can type 3mm or 5’2” for example. Your value appears in the Measurements box as you type.
- An absolute coordinate, such as [3’, 5’, 7’], places the end of the line relative to the current axes. Square brackets indicate an absolute coordinate.
- A relative coordinate, such as <1.5m, 4m, 2.75m>, places the end of the line relative to the starting point of your line. Angle brackets indicate a relative coordinate.
You can edit the length of a line as long as it doesn’t bound a face. Here’s how to edit a line:
- Select the Move tool ().
- Hover the Move tool cursor over one of the line’s end points.
- Click and drag the end point to change the line’s length.
Creating a face
When you join several lines into a shape, they form a face.
Not a funny face, or a scary clown face, or even a cute puppy face. By default, faces are plain, but super important: They’re the other half of the duo, edges and faces, which enable every SketchUp model ever made to exist.
The shape tools — Rectangle, Circle, and Polygon — also create faces. (See Drawing Basic Shapes for more about those tools.)
When you draw a line (or a curve) on an existing face, you split the face.
Opening 3D shapes by erasing edges and faces
You can erase an edge or face to create an opening in a shape. To see how erasing an edge affects your model, first select the Eraser tool () in the toolbar or press the E key, and then click an edge:
- Clicking an edge erases the edge and any face that touched that edge. As Billy Idol almost sang, you can have lines without a face. However, a face must be completely bound by edges.
- Context-clicking a face and choosing Erase deletes only the face.
In the figure, you see the original cube and how erasing an edge or face changes the cube.
Healing deleted faces
If you accidentally delete a face, here’s how to bring it back:
- If you haven’t made any other changes that you’d like to keep, simply select Edit > Undo from the menu bar. Or press the keyboard shortcut for Undo, Ctrl+Z (Microsoft Windows) or Command+Z (macOS).
- Redraw the line that caused the faces to disappear, and SketchUp will re-create the faces.
Finding and locking an inference
SketchUp has an inference engine that helps you work in 3D space. For example, when the Line tool cursor is hovering over the midpoint of another line, the inference engine tells you by displaying a light blue dot and ScreenTip that says, “Midpoint,” as shown here. Every inference has its own color and ScreenTip. (See Knowing your inference types for a full list.)
The inference engine can also help you find geometric relationships between lines. For example, it tells you when a line you’re drawing is perpendicular to another line. In the following figure, notice that a colored dot also appears at the start point of the line, giving you a few bits of information all at once.
Knowing your inference types
SketchUp displays several types of inferences: point, linear, and shape. SketchUp often combines inferences together to form a complex inference. Also, components and dynamic components have their own inference types.
A point inference is based on the exact point of your cursor in your model. The following table lists the point inference types.
|Point Inference Type||What It Looks Like||What It Means|
|Origin point||The point at the intersection of the three drawing axes|
|Component Origin Point||The axis origin point within a component and the component's default insertion point|
|Endpoint||End of a line, arc, or arc segment|
|Midpoint||Middle point on a line, edge, or arc segment|
|Arc Midpoint||Middle point on an arc|
|Intersection||Point where a line intersects another line or face|
|On Face||A point that lies on a face|
|On Edge||A point that lies on an edge|
|Center||Center of a circle, arc, or polygon|
|Guide Point||A guide point|
|On Line||A point along a guide line|
|On Section||Point where a drawing tool creates an edge on a section plane|
|Intersection with Hidden Section||Point where an edge that is generated by a hidden section plane intersects with the drawing tool|
A linear inference snaps along a line or direction in space. In addition to a ScreenTip, a linear inference sometimes displays a temporary dotted line while you draw.
|Linear Inference Type||What It Looks Like||What It Means|
|On Red Axis||Linear alignment to the red drawing axis (Click and drag as you draw to see the inference.)|
|On Green Axis||Linear alignment to green drawing axis (Click and drag as you draw.)|
|On Blue Axis||Linear alignment to the blue drawing axis (Click and drag as you draw.)|
|From Point||Linear alignment from a point; the dotted line’s color corresponds to the axis direction|
|Through Point||Draw from one point, hover over another point then hold Shift to lock the direction from the start of the drawing through the second point.|
|Parallel||Parallel alignment to an edge|
|Extend Edge||Continuation of an existing edge|
|Perpendicular||Perpendicular alignment to an edge|
|Perpendicular to Face||Perpendicular alignment to a face|
|Tangent at Vertex||Arc whose vertex is tangent to a previously drawn arc's vertex|
Shape inferences help you pinpoint the moment when a rectangle becomes a square, for example. The following table lists all the shape inferences.
|Shape Inference Type||What It Looks Like||What It Means|
|Square||A rectangle whose sides are all the same size|
|Golden Section||A rectangle whose properties match the Golden Ratio as found in mathematics and the arts|
|Half Circle, Quarter Circle, or Three-Quarter Circle||An arc that is exactly one half of a circle, one quarter circle, or three-quarters of a circle, respectively.|
|Arc Side and Center||An arc shows edge and center inferences when a drawing tool hovers on the arc.|
|Circle/Polygon Center||A circle or polygon shows a center inference when a drawing tool hovers over its edge.|
Starting with SketchUp 2016, the appearance of inferences on-screen changed, as shown in the following video.
Locking inferences with a keyboard
By locking inferences, you can confidently draw along the direction you intend to draw. Another reason to lock an inference is to maintain one drawing direction while you reference geometry from another part of the model. That’s a more advanced move, but very helpful. The easiest way to lock an inference to the default axes directions is to use the arrow keys:
|Key||What it looks like|
|↑||Locks the drawing direction or drawing plane to the Blue axis|
|←||Locks the drawing direction or drawing plane to the Green axis|
|→||Locks the drawing direction or drawing plane to the Red axis. A good way to remember left from right is to say “Right locks Red.”|
|↓||Toggles a lock of the parallel/perpendicular drawing direction or drawing plane to an inferenced edge or plane (basically, anything that turns magenta). The drawing direction turns magenta, as does the edge or face being inferenced.|
|Shift||Locks the drawing direction or drawing plane to the active drawing direction/plane. So if you’re drawing along the Blue axes and hold down Shift, the Blue inference will lock.|
|Shift+Alt (Microsoft Windows) or Shift+Command (macOS)||Holding Shift to lock the drawing plane also locks the tool to the same face plane that is inferenced. For the Rotate and Protractor tools, however, press the Alt key (Microsoft Windows) or Command key (macOS) to free those tools so that you can move the center to another place in the model while maintaining the same drawing plane.|
Some tools, like the circle and rotate tools, can lock to a plane (instead of a drawing direction) as shown below. For these tools you can lock the drawing plane by choosing the colored direction for the tool’s axis or “normal”.
Ensuring edges are aligned to axes
To ensure your edges align to axes, you may find it helpful to change the cursor to the axes colors. Or if you need to check the alignment of existing geometry, change your edges to the axes colors.
To change your cursor to axes colors, follow these steps:
- Select Window > Preferences (Microsoft Windows) or SketchUp > Preferences (macOS). The SketchUp Preferences dialog box appears.
- Select the Drawing item on the left.
- In the Miscellaneous area of the Drawing panel, select the Display cross hairs checkbox.
- Click OK to close the SketchUp Preferences dialog box. The cursor displays cross hairs that are the color of the axes, as shown here.
To make the edges in your model reflect the axis colors to which they are aligned, follow these steps:
- Select Window > Styles.
- In the Styles dialog box, select In Model from the drop-down list of styles libraries.
- Click the Edit tab.
- Click the Edge Settings icon, shown in the figure.
- From the Color drop-down list, select By axis. The colors of the edges in your model change to reflect their alignment to the axes (unless an edge isn’t aligned to an axis, and then the edge color does not change). The following figure shows which edges are (and are not) aligned to the three axes. |
Follow the directions in the dialog box after pressing the <START> button. The <EXPLAIN> button may be pressed to see how to do the example.
When Compare Fractions starts, you will be given two fractions to compare, as in the example below:
You are to decide if the fraction on the left is less than, equal to, or greater than the fraction on the right. You will choose < for less than, = for equal to, or > for greater than. Shown in the dialog box are your choices of <, >, or =.
If the denominators are the same, the fraction with the larger numerator is larger and if the numerators are the same, the fraction with the larger denominator is smaller.
The following compare fractions illustration was made by Compare Fractions With Circles Designer:
The fractions 2⁄5 and 1⁄2 are pictured below:
Let's make the denominators the same so that we can compare the numerators. Fractions with the same denominators are like fractions.
Here, we will introduce the idea of the least common denominator or LCD. LCD is an idea that will be used in comparing, adding, and subtracting fractions. The LCD is the smallest number that both 5 and 2 will divide into evenly. Ten is the LCD for the fractions 2⁄5 and 1⁄2 because both denominators 5 and 2 divide evenly into 10.
Once the LCD is found, each fraction is written with the LCD. As you can see by the illustration, 2⁄5 is equal to 4⁄10 and 1⁄2 is equal to 5⁄10. Once each fraction is renamed with a common denominator, you only have to compare the numerators. The larger the numerator the larger the fraction.
See the program RENAME IN HIGHER TERMS for more information on renaming fractions.
One way to determine the LCD is to see if the smaller denominator, 2, will divide evenly into the larger denominator, 5. It does not, so multiply the larger denominator by 2 to get 10. Will the smaller denominator, 2, divide into 10? Yes, so 10 is the LCD. If not, multiply the larger denominator by 3, then 4, and so on, until the smaller denominator divides evenly into the product.
Another method is to multiply the two denominators and then divide that product by the greatest common factor (GCF) of the two denominators. The greatest common factor is the largest number that will divide evenly into the two denominators. In the example above the GCF of 5 and 2 is 1. The product of 5 and 2 is 10. Divide 10 by 1 and you get 10 for the LCD.
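The two LCD methods just described can be expressed in a few lines of Python. This is only a sketch of the arithmetic, not part of the Compare Fractions program itself:

```python
# Sketch of the LCD arithmetic described above (not the actual program).
from math import gcd
from fractions import Fraction

def lcd(d1, d2):
    # "Multiply the two denominators and divide by their greatest common factor."
    return d1 * d2 // gcd(d1, d2)

def compare(n1, d1, n2, d2):
    common = lcd(d1, d2)
    a = n1 * (common // d1)          # 2/5 renamed as 4/10
    b = n2 * (common // d2)          # 1/2 renamed as 5/10
    symbol = "<" if a < b else (">" if a > b else "=")
    return f"{n1}/{d1} {symbol} {n2}/{d2}  (renamed as {a}/{common} and {b}/{common})"

print(compare(2, 5, 1, 2))               # 2/5 < 1/2
print(Fraction(2, 5) < Fraction(1, 2))   # the standard library agrees: True
```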
Also, you can see that 2⁄5 is less than half the circle so 2⁄5 is smaller.
Choose the < (less than) button if you think the first fraction is smaller than the second. Choose the = (equal) button if you think the two fractions are the same size. Choose the > (greater than) button if you think the first fraction is larger. If correct, circles showing the comparative sizes of the two fractions will appear. Press the <EXPLAIN> button to see each fraction with the common denominator.
For more instruction on comparing fractions go to How To Compare Fractions.
After you press the <, =, or > button you may press the <REPORT> button. The report will ask for your name, but you may submit a code instead of your name. The report will give the same results as on the dialog box. The report may be printed or e-mailed.
What is a Function? The Difference between Functions and Relations 03:51 minutes
Test your knowledge
Learn with fun & improve your grades
Start your FREE trial now and get instant access to this video and...
study at your own pace — with more videos that break down even the most difficult topics in easy-to-understand chunks of knowledge.
increase your confidence in class — by practicing before tests or exams with our fun interactive Practice Problems.
practice whenever and wherever — Our PDF Worksheets fit the content of each video. Together they create a well-rounded learning experience for you.
Transcript What is a Function? The Difference between Functions and Relations
Herman the German is at the end of his vacation in Japan, but he forgot to buy souvenirs! He decides to get a souvenir from one of the vending machines on one of the many touristy streets. Herman walks up to a pair of vending machines that look similar, but not exactly the same. The vending machines sell the same items and the keypads are the same.
Herman remembers that f(x) is math code for function notation and that the mark on the other vending machine is called a relation mapping diagram. What do functions and relations have to do with vending machines? Herman decides to go to the relation vending machine. He does feel lucky. Herman puts in 100 ¥. He chooses the Lap Pillow and enters E3 in the keypad.
What's happening? Why is the vending machine giving him a Noodle Eating Guard that's also labeled E3? Upon closer inspection, Herman notices that there's something curious about this particular vending machine. There are several items that are labeled E3, and there are also a few items labeled I3.
And look at that! S7 is the only item with that label. Herman decides to get a Rocketcroc Toaster. Again, he puts in money and enters in S7. Perfect! Herman decides to give the other items another try. After all, he does feel lucky! Once again, Herman puts in 100 ¥, chooses an item and enters the number in the keypad. This time, he chooses B3 since there are only two items labeled B3. Herman gets the square watermelon. Nice, but he wanted a Mommagotcha. So he tries again.
This time, he gets the Mommagotcha! But wait, he didn't do anything differently, yet he got two different items. Herman thinks back to math class and remembers his teacher telling him that relations are when each element in the domain is related to one or more items in the range. When he enters the code for an item, any one of the items with the same label could come out. With relations, an element in the domain of inputs can be related to one or more items in the range of outputs. Enough of this nonsense. Herman is pressed for time and he can't hope for a cool souvenir.
Herman decides to use the vending machine labeled with the function notation. Surely this will act like it should. Herman remembers that the function notation version of y = x is f(x) = x. And, although the name of this function is 'f', some other common letters used in function notation are 'g' or 'h', these would be read 'g' of 'x' and 'h' of 'x', respectively.
But no matter how a function is written, it has three main parts. First there is an input, 'x', that is chosen out of a set of starting points called the domain. Then, the function changes each input into a unique output, f(x), the artist formerly known as 'y'. The outputs create a set called the range.
Herman's sure he can get what he wants. He has his eye on AR2, which is the selfie stick. This'll make the perfect gift for his girlfriend! You've gotta be kidding, the item's not coming out!
Herman's got an idea...Well, that didn't work. What's this? Herman catches a glimpse of a tool machine...Maybe...just maybe...NO...no...this is definitely worse.
What is a Function? The Difference between Functions and Relations Exercise
Would you like to apply what you have learned? With the exercises for the video What is a Function? The Difference between Functions and Relations you can review and practice it.
Describe Herman's problems with the vending machine.
Every function is a relation, but not every relation is a function.
A function is a special kind of relation where for every input $x$ there is at most one output $y$.
He puts 100 yen in the machine and enters B3 for the Mommagotcha. But he gets a square watermelon instead.
That's because there are several items with the same label, like the Mommagotcha and the square watermelon, both labeled B3. There is also a label with only one item: S7, the Rocketcroc Toaster.
With relations, each element in the domain is related to one or more items in the range.
Find three main facts about functions.
Each element of the left set is assigned to one element of the right set.
The following is an example of a function:
you $\rightarrow$ your age.
Keep in mind that not every relation is a function. For example:
you $\rightarrow$ the names of all your friends.
There are three main facts about functions:
- There is a set of all input values $x$, called the domain.
- A function $f(x)$ changes an input value $x$ into a unique output value $y$.
- The set of all output values $y$ is called the range.
Explain the difference between functions and relations.
The following relation is a function:
you $\rightarrow$ number of brothers and sisters you have
The following relation is not a function:
you $\rightarrow$ the names of your brothers and sisters
Every function is a relation, but not the other way round.
Both functions and relations have a set of inputs, called the domain, and a set of outputs, called the range.
For any relation, for each element of the domain you can have one or more elements in the range.
For a function, for each element of the domain you can have at most one element in the range.
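This distinction can be checked mechanically: a relation given as a set of (input, output) pairs is a function exactly when no input appears with two different outputs. A small sketch, with pairs invented to mirror the vending-machine story:

```python
# A relation is a set of (input, output) pairs. It is a function if and only if
# no input is paired with more than one distinct output. Pairs are invented.

def is_function(pairs):
    seen = {}
    for x, y in pairs:
        if x in seen and seen[x] != y:
            return False          # same input, two different outputs -> only a relation
        seen[x] = y
    return True

vending_relation = [("E3", "Lap Pillow"), ("E3", "Noodle Eating Guard"), ("S7", "Rocketcroc Toaster")]
vending_function = [("E3", "Lap Pillow"), ("B3", "Mommagotcha"), ("S7", "Rocketcroc Toaster")]

print(is_function(vending_relation))   # False: E3 maps to two different items
print(is_function(vending_function))   # True: every code maps to exactly one item
```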
Determine if the assignment is a function or relation.
A person has a unique hair color but different people could have the same hair color.
For a function the assignment must be unique.
Here is an example for the difference between a relation and a function:
Paul $\rightarrow$ date of his birthday
is unique and thus a function.
Turning this assignment around, we get
date $\rightarrow$ the person who was born on this date
which isn't unique at all.
Peter, Paul, and Mary drink just one soda each, but two of them could drink the same kind of soda.
The difference of a relation and a function is the uniqueness of the assignment.
Each person has a unique hair color, their own, so this assignment is a function. But the other way round, you can surely find more than one person with the same hair color. Thus that is a relation.
Social security card
Each person has a unique social security number. So this is a function.
Paul has the email addresses firstname.lastname@example.org, email@example.com, and firstname.lastname@example.org. So three email addresses are assigned to Paul. This is a relation.
Each of the three drinks just one soda, so that's a function. Since two or three of them could order the same soda, this direction is just a relation.
Decide which mapping diagrams represent a function.
Keep the definition of a function in mind: for every element of the domain $x$ there exists at most one element in the range $y$ which is assigned to $x$.
You can imagine the definition of a function as follows: for each element in the domain there is at most one arrow.
If every $x$ in the domain is assigned to the same $y$, then the function is called a constant function.
Here is an example of a function:
The kinds of diagrams we are looking at are called mapping diagrams. On the left of each picture we have the domain, the set of inputs, and on the right we have the range, the set of outputs.
If all inputs $x$ are assigned to at most one output $y$ then the mapping diagram in question is that of a function.
For any mapping diagram of a function, you see that only one arrow starts at any element of the domain. How many arrows lead to an element in the range does not matter.
Thus, from left to the right, we have a relation, a function, a function, a function, and a relation.
Identify which statements are describing a function.
For a function, we must have that each input $x$ is assigned at most one output $y$.
Remember the important facts about the town given above.
For each house in town the address is uniquely assigned. Because of this, we can then view the assignment of addresses to houses as a function! This is how the postman knows where to deliver the mail.
Specifically, we have the function:
house $\rightarrow$ address,
where the address includes the street name, the house number, and the zip code.
If you leave out any of the three parts of the address, we don't have a function any longer, as the address no longer becomes unique. We know this from the given facts about the town.
- If we leave out the street name, then we know there exists more than one house with the number $30$ in town with the zip code 12345.
- If we leave out the house number, then we know there exists more than one house on a Beagle Street in the town with the zip code 12345.
- If we leave out the zip code, then we know there exists more than one house in town on Beagle Street with the house number 30. |
Time: 32 hours
College Credit Recommended
The purpose of this course is to introduce you to the subject of statistics as a science of data. Data abounds in this information age; how to extract useful knowledge and gain a sound understanding of complex data sets has been more of a challenge. In this course, we will focus on the fundamentals of statistics, which may be broadly described as the techniques to collect, clarify, summarize, organize, analyze, and interpret numerical information.
This course will begin with a brief overview of the discipline of statistics and will then quickly focus on descriptive statistics, introducing graphical methods of describing data. You will learn about combinatorial probability and random distributions, the latter of which serves as the foundation for statistical inference. On the side of inference, we will focus on both estimation and hypothesis testing issues. We will also examine the techniques to study the relationship between two or more variables; this is known as regression.
By the end of this course, you should gain a sound understanding of what statistics represent, how to use statistics to organize and display data, and how to draw valid inferences based on data by using appropriate statistical tools.
First, read the course syllabus. Then, enroll in the course by clicking "Enroll me". Click Unit 1 to read its introduction and learning outcomes. You will then see the learning materials and instructions on how to use them.
In today's technologically advanced world, we have access to large volumes of data. The first step of data analysis is to accurately summarize all of this data, both graphically and numerically, so that we can understand what the data reveals. To be able to use and interpret the data correctly is essential to making informed decisions. For instance, when you see a survey of opinion about a certain TV program, you may be interested in the proportion of those people who indeed like the program.
In this unit, you will learn about descriptive statistics, which are used to summarize and display data. After completing this unit, you will know how to present your findings once you have collected data. For example, suppose you want to buy a new mobile phone with a particular type of camera. Suppose you are not sure about the prices of any of the phones with this feature, so you access a website that provides you with a sample data set of prices, given your desired features. Looking at all of the prices in a sample can sometimes be confusing. A better way to compare this data might be to look at the median price and the variation of prices. The median and variation are two of several ways that you can describe data. You can also graph the data so that it is easier to see what the price distribution looks like.
In this unit, you will study precisely this; namely, you will learn both numerical and graphical ways to describe and display your data. You will understand the essentials of calculating common descriptive statistics for measuring center, variability, and skewness in data. You will learn to calculate and interpret these measurements and graphs.
Descriptive statistics are, as their name suggests, descriptive. They do not generalize beyond the data considered. Descriptive statistics illustrate what the data shows. Numerical descriptive measures computed from data are called statistics. Numerical descriptive measures of the population are called parameters. Inferential statistics can be used to generalize the findings from sample data to a broader population.
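To make the phone-price idea above concrete, here is a minimal sketch using Python's standard statistics module; the prices are invented for illustration and are not real market data.

```python
import statistics

prices = [199, 249, 249, 299, 329, 349, 399, 449, 499, 899]  # hypothetical sample

print("median price:", statistics.median(prices))                 # a measure of center
print("mean price:", round(statistics.mean(prices), 2))           # pulled up by the 899 outlier
print("standard deviation:", round(statistics.stdev(prices), 2))  # a measure of variation
```

Because this sample contains one unusually expensive phone, the median (339) describes a "typical" price better than the mean (392) here, which is exactly the kind of judgment this unit teaches.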
Completing this unit should take you approximately 7 hours.
Probabilities affect our everyday lives. In this unit, you will learn about probability and its properties, how probability behaves, and how to calculate and use it. You will study the fundamentals of probability and will work through examples that cover different types of probability questions. These basic probability concepts will provide a foundation for understanding more statistical concepts, for example, interpreting polling results. Though you may have already encountered concepts of probability, after this unit, you will be able to formally and precisely predict the likelihood of an event occurring given certain constraints.
Probability theory is a discipline that was created to deal with chance phenomena. For instance, before getting a surgery, a patient wants to know the chances that the surgery might fail; before taking medication, you want to know the chances that there will be side effects; before leaving your house, you want to know the chance that it will rain today. Probability is a measure of likelihood that takes on values between 0 and 1, inclusive, with 0 representing impossible events and 1 representing certainty. The chances of events occurring fall between these two values.
The skill of calculating probability allows us to make better decisions. Whether you are evaluating how likely it is to get more than 50% of the questions correct on a quiz if you guess randomly; predicting the chance that the next storm will arrive by the end of the week; or exploring the relationship between the number of hours students spend at the gym and their performance on an exam, an understanding of the fundamentals of probability is crucial.
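As one concrete instance of the quiz example just mentioned, the sketch below computes the chance of getting more than 50% of the questions right by pure guessing, under the stated (and invented) assumptions of a 10-question true/false quiz answered independently at random.

```python
from math import comb

n, p = 10, 0.5   # 10 questions, each guessed correctly with probability 0.5
# "more than 50% correct" means 6 or more correct answers out of 10
prob = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(6, n + 1))
print(round(prob, 3))   # about 0.377
```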
We will also talk about random variables. A random variable describes the outcomes of a random experiment. A statistical distribution describes the number of times each possible outcome occurs in a sample. The values of a random variable can vary with each repetition of an experiment. Intuitively, a random variable, summarizing a certain chance phenomenon, takes on values with certain probabilities. A random variable can be classified as being either discrete or continuous, depending on the values it assumes. Suppose you count the number of people who go to a coffee shop between 4 p.m. and 5 p.m. and the amount of waiting time that they spend in that hour. In this case, the number of people is an example of a discrete random variable and the amount of waiting time they spend is an example of a continuous random variable.
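A toy simulation of the coffee-shop example may help fix the distinction; every number below is invented for illustration.

```python
import random

random.seed(0)
# Number of customers in the hour: a discrete random variable (a count).
customers = sum(1 for _ in range(60) if random.random() < 0.3)
# Waiting time of each customer, in minutes: a continuous random variable.
waiting_times = [round(random.expovariate(1 / 4.0), 2) for _ in range(customers)]

print("customers between 4 p.m. and 5 p.m.:", customers)
print("their waiting times (minutes):", waiting_times)
```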
Completing this unit should take you approximately 8 hours.
The concept of sampling distribution lies at the very foundation of statistical inference. It is best to introduce sampling distribution using an example here. Suppose you want to estimate a parameter of a population, say the population mean. There are two natural estimators: 1. sample mean, which is the average value of the data set; and 2. median, which is the middle number when the measurements are arranged in ascending (or descending) order. In particular, for a sample of even size n, the median is the mean of the middle two numbers. But which one is better, and in what sense? This involves repeated sampling, and you want to choose the estimator that would do better on average. It is clear that different samples may give different sample means and medians; some of them may be closer to the truth than the others. Consequently, we cannot compare these two sample statistics or, in general, any two sample statistics on the basis of their performance with a single sample. Instead, you should recognize that sample statistics are themselves random variables; therefore, sample statistics should have frequency distributions by taking into account all possible samples. In this unit, you will study the sampling distribution of several sample statistics. This unit will show you how the central limit theorem can help to approximate sampling distributions in general.
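The comparison of the sample mean and the sample median described above can be explored with a short simulation; the population (normal with mean 50 and standard deviation 10) and the sample size are assumptions made purely for illustration.

```python
import random
import statistics

random.seed(1)
# Draw 2,000 samples of size 25 from the same population.
samples = [[random.gauss(50, 10) for _ in range(25)] for _ in range(2000)]

means = [statistics.mean(s) for s in samples]
medians = [statistics.median(s) for s in samples]

# Both estimators vary around 50, but their sampling distributions differ:
# for normally distributed data the mean has the smaller spread.
print("spread of sample means:  ", round(statistics.stdev(means), 2))    # near 10/sqrt(25) = 2.0
print("spread of sample medians:", round(statistics.stdev(medians), 2))  # somewhat larger
```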
Completing this unit should take you approximately 5 hours.
In this unit, you will learn how to use the central limit theorem and confidence intervals, the latter of which enables you to estimate unknown population parameters. The central limit theorem provides us with a way to make inferences from samples of non-normal populations. This theorem states that given any population, as the sample size increases, the sampling distribution of the means approaches a normal distribution. This powerful theorem allows us to assume that given a large enough sample, the sampling distribution will be normally distributed.
You will also learn about confidence intervals, which provide you with a way to estimate a population parameter. Instead of giving just a one-number estimate of a variable, a confidence interval gives a range of likely values for it. This is useful, because point estimates will vary from sample to sample, so an interval with a certain confidence level is better than a single point estimate. After completing this unit, you will know how to construct such confidence intervals and how to interpret the level of confidence.
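As a rough sketch of the idea, the snippet below builds a 95% confidence interval for a population mean using the large-sample normal approximation that the central limit theorem justifies; the data and the choice of 1.96 as the 95% z-value are illustrative assumptions, not part of the course materials.

```python
import math
import statistics

data = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7, 12.5, 12.0,
        11.9, 12.1, 12.4, 12.2, 11.8, 12.0, 12.3, 12.1, 11.9, 12.2,
        12.0, 12.4, 11.8, 12.1, 12.3, 12.0, 12.2, 11.9, 12.1, 12.0]  # invented sample

n = len(data)
mean = statistics.mean(data)
se = statistics.stdev(data) / math.sqrt(n)         # standard error of the sample mean

lower, upper = mean - 1.96 * se, mean + 1.96 * se  # 95% interval (z = 1.96)
print(f"95% confidence interval for the mean: ({lower:.2f}, {upper:.2f})")
```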
Completing this unit should take you approximately 4 hours.
A hypothesis test involves collecting and evaluating data from a sample. The data gathered and evaluated is then used to make a decision as to whether or not the data supports the claim that is made about the population. This unit will teach you how to conduct hypothesis tests and how to identify and differentiate between the errors associated with them.
Many times, you need answers to questions in order to make efficient decisions. For example, a restaurant owner might claim that his restaurant's food costs 30% less than other restaurants in the area, or a phone company might claim that its phones last at least one year more than phones from other companies. In order to decide whether it would be more affordable to eat at the restaurant that "costs 30% less" or another restaurant in the area, or in order to decide which phone company to choose based on the durability of the phone, you will have to collect data to test these claims. The process of hypothesis testing is a way of making such decisions. In this unit, you will learn to establish your assumptions through null and alternative hypotheses. The null hypothesis is the hypothesis that is assumed to be true and the hypothesis you hope to nullify, while the alternative hypothesis is the research hypothesis that you claim to be true. This means that you need to conduct the correct tests to be able to accept or reject the null hypothesis. You will learn how to compare sample characteristics to see whether there is enough data to accept or reject the null hypothesis.
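To make the mechanics tangible, here is a hedged sketch of a one-sample test about a mean using a z statistic and the normal approximation; the battery-life measurements and the hypothesized value of 365 days are invented for the example and are not a recipe prescribed by the course.

```python
import math
import statistics

sample = [370, 362, 381, 355, 374, 368, 377, 360, 372, 366,
          379, 358, 371, 369, 363, 376, 367, 373, 361, 375]  # invented measurements
mu0 = 365  # null hypothesis: the population mean is 365 days

n = len(sample)
z = (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / math.sqrt(n))
# Two-sided p-value from the standard normal distribution.
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print("z statistic:", round(z, 2), " p-value:", round(p_value, 4))
print("reject H0 at the 5% level" if p_value < 0.05 else "fail to reject H0")
```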
Completing this unit should take you approximately 4 hours.
In this unit, we will discuss situations in which the mean of a population, treated as a variable, depends on the value of another variable. One of the main reasons why we conduct such analyses is to understand how two variables are related to each other. The most common type of relationship is a linear relationship. For example, you may want to know what happens to one variable when you increase or decrease the other variable. You want to answer questions such as, "Does one variable increase as the other increases, or does the variable decrease?" For example, you may want to determine how the mean reaction time of rats depends on the amount of drug in the bloodstream.
In this unit, you will also learn to measure the degree of a relationship between two or more variables. Both correlation and regression are measures for comparing variables. Correlation quantifies the strength of a relationship between two variables and is a measure of existing data. On the other hand, regression is the study of the strength of a linear relationship between an independent and dependent variable and can be used to predict the value of the dependent variable when the value of the independent variable is known.
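The sketch below illustrates both measures on a made-up data set (hours studied versus exam score); the formulas are the standard least-squares ones, and none of the numbers come from the course.

```python
import statistics

hours = [1, 2, 3, 4, 5, 6, 7, 8]           # independent variable (invented)
score = [52, 55, 61, 64, 70, 74, 79, 83]   # dependent variable (invented)

mean_x, mean_y = statistics.mean(hours), statistics.mean(score)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(hours, score))
sxx = sum((x - mean_x) ** 2 for x in hours)
syy = sum((y - mean_y) ** 2 for y in score)

r = sxy / (sxx * syy) ** 0.5     # correlation: strength of the linear relationship
slope = sxy / sxx                # regression line: score ≈ intercept + slope * hours
intercept = mean_y - slope * mean_x

print("correlation r:", round(r, 3))
print("predicted score for 9 hours of study:", round(intercept + slope * 9, 1))
```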
Completing this unit should take you approximately 4 hours.
This study guide will help you get ready for the final exam. It discusses the key topics in each unit, walks through the learning outcomes, and lists important vocabulary terms. It is not meant to replace the course materials!
Course Feedback Survey
Please take a few minutes to give us feedback about this course. We appreciate your feedback, whether you completed the whole course or even just a few resources. Your feedback will help us make our courses better, and we use your feedback each time we make updates to our courses.
If you come across any urgent problems, email email@example.com or post in our discussion forum.
Certificate Final Exam
Take this exam if you want to earn a free Course Completion Certificate.
To receive a free Course Completion Certificate, you will need to earn a grade of 70% or higher on this final exam. Your grade for the exam will be calculated as soon as you complete it. If you do not pass the exam on your first try, you can take it again as many times as you want, with a 7-day waiting period between each attempt.
Once you pass this final exam, you will be awarded a free Course Completion Certificate.
- Receive a grade
Saylor Direct Credit
Take this exam if you want to earn college credit for this course. This course is eligible for college credit through Saylor Academy's Saylor Direct Credit Program.
The Saylor Direct Credit Final Exam requires a proctoring fee of $5. To pass this course and earn a Proctor-Verified Course Certificate and official transcript, you will need to earn a grade of 70% or higher on the Saylor Direct Credit Final Exam. Your grade for this exam will be calculated as soon as you complete it. If you do not pass the exam on your first try, you can take it again a maximum of 3 times, with a 14-day waiting period between each attempt.
We are partnering with SmarterProctoring to help make the proctoring fee more affordable. We will be recording you, your screen, and the audio in your room during the exam. This is an automated proctoring service, but no decisions are automated; recordings are only viewed by our staff with the purpose of making sure it is you taking the exam and verifying any questions about exam integrity. We understand that there are challenges with learning at home - we won't invalidate your exam just because your child ran into the room!
- Desktop Computer
- Chrome (v74+)
- Webcam + Microphone
- 1 Mbps+ Internet Connection
Once you pass this final exam, you will be awarded a Credit-Recommended Course Completion Certificate and can request an official transcript.
- Desktop Computer |
Wednesday, August 15th 2018
Geometry Power Standards Resources
SPI 3108.1.4 Use definitions, basic postulates, and theorems about points, lines, angles, and planes to write/complete proofs and/or to solve problems.
Interactive: Parallel Lines
Interactive: Angles and Triangles
Interactive: Lines and Planes
Interactive: Conditional, Converse, Inverse, and Biconditional Statements
SPI 3108.3.2 Use coordinate geometry to prove characteristics of polygonal figures.
Interactive Lesson: Slope
Interactive Lesson: Distance Formula
Interactive Lesson: Midpoint Formula
SPI 3108.4.2 Define, identify, describe, and/or model plane figures using appropriate mathematical symbols (including collinear and non-collinear points, lines, segments, rays, angles, triangles, quadrilaterals, and other polygons).
Interactive Lesson: Locus
Game: Geometry Review Bingo
Game: GeoCaching Review Game
Interactive Lessons: Quadrilaterals
Interactive: Quadrilaterals (Multiple Lessons)
Interactive: Polygons (Multiple Lessons)
SPI 3108.4.7 Compute the area and/or perimeter of triangles, quadrilaterals and other polygons when one or more additional steps are required (e.g. find missing dimensions given area or perimeter of the figure, using trigonometry).
Interactive: Area and Perimeter Explorer
Interactive Lesson: Area of Hexagon
Interactive Lesson: Area of Octagon
Interactive Lesson: Area of Pentagon
SPI 3108.4.8 Solve problems involving area, circumference, area of a sector, and/or arclength of a circle.
Interactive: Area of a Circle
Interactive: Area of Sectors
Interactive Lesson: Arc Length and other Circle Lessons
SPI 3108.4.9 Use right triangle trigonometry and cross-sections to solve problems involving surface areas and/or volumes of solids.
Interactive Lesson: Surface Area and Volume
Interactive: Surface Area and Volume
Interactive Lessons: Surface Area and Volume (Multiple Activities)
SPI 3108.4.12 Solve problems involving congruence, similarity, proportional reasoning and/or scale factor of two similar figures or solids.
Interactive: Proofs of Congruent Triangles
Interactive: Proportionality (Multiple Lessons)
Video Lesson: Proving Triangles Congruent Using SSS, SAS, AAS, ASA
Examples: Dimensional Analysis Problems
Video Lessons: Volume
Video: Dimensional Analysis (scroll down)
SPI 3108.4.14 Use properties of right triangles to solve problems (such as involving the relationship formed when the altitude to the hypotenuse of a right triangle is drawn).
Interactive: Altitude of a Right Triangle
Interactive: Special Triangle Segments
SPI 3108.4.15 Determine and use the appropriate trigonometric ratio for a right triangle to solve a contextual problem.
Interactive Applet: Trig Ratios
Quiz Applet: Trig Countdown
Interactive Lesson: Triangles and Waves
Geometry Test (Free)
Geometry Facts Flash Cards ($.99)
Khan Academy - Geometry (Free)
Geometry ++ 2D Edition ($.99)
Geometry ++ 3D Edition ($.99)
Tangram Puzzle Pro ($.99)
Pattern Blocks ($.99)
Graphing Calculator ($1.99)
ACT Math Test Practice ($3.99) |
Delaware Indians. The name is derived from that of Delaware River, which in turn, was named for Lord Delaware, second governor of Virginia. Also called:
Abnaki or Wabanaki, “Easterners,” from their position relative to many other Algonquian tribes. (See Abnaki under Maine, Wampanoag under Massachusetts, and Wappinger under New York.)
A-ko-tca-ka’nen, “One who stammers in his speech,” the Mohawk name.
The Oneida and Tuscarora names were similar.
Anakwaneki, Cherokee name, an attempt at Wabanaki.
Lenni Lenape (their own name), meaning “true men,” or “standard men.”
Loup, “wolf,” so called by the French.
Mochomes, “grandfather,” name given by those Algonquian tribes which claimed descent from them.
Nar-wah-ro, Wichita name.
Renni Renape, a form of Lenni Lenape.
Tca-ka’nen, shortened form of Mohawk name given above. (The names
in the languages of the other four Iroquois tribes are about the same).
Connections. The Delaware belonged to the Algonquian linguistic stock, their closest relatives being the Nanticoke, Conoy, and Powhatan Indians to the south and the Mahican, Wappinger, and southern New England Indians on the north. The dialect of the northernmost of their major divisions, the Munsee, differed considerably from that of the southern groups.
Location. The Delaware occupied all of the State of New Jersey, the western end of Long Island, all of Staten and Manhattan Islands and neighboring parts of the mainland, along with other portions of New York west of the Hudson, and parts of eastern Pennsylvania, and northern Delaware. (See also Delaware, Illinois, Indiana, Kansas, Maryland and the District of Columbia, Missouri, New York, Ohio, Pennsylvania, Oklahoma, and the Munsee under Kansas, Oklahoma, and Wisconsin.)
There were three major divisions or subtribes, the Munsee in northern New Jersey and adjacent portions of New York west of the Hudson, the Unalachtigo in northern Delaware, southeastern Pennsylvania, and southern New Jersey, and the Unami in the intermediate territory, extending to the western end of Long Island. Each comprised a great many minor divisions which it is not always easy to classify under the three main heads.
As Munsee may probably be reckoned the following:
Catskill, on Catskill Creek, Greene County, N. Y.
Mamekoting, in Mamakating Valley, west of the Shawangunk Mountains, N. Y.
Minisink, on the headwaters of Delaware River in the southwestern part of Ulster and Orange Counties, N. Y., and the adjacent parts of New Jersey and Pennsylvania.
Waranawonkong, in the country watered by the Esopus, Wallkill, and Shawangunk Creeks, mainly in Ulster County, N. Y.
Wawarsink, centered about the junction of Wawarsing and Rondout Creeks,
Ulster County, N. Y.
We may class as Unami the following:
Aquackanonk, on Passaic River, N. J., and lands back from it including the tract called Dundee in Passaic.
Assunpink, on Stony Creek near Trenton.
Axion, on the eastern bank of Delaware River between Rancocas Creek and Trenton.
Calcefar, in the interior of New Jersey between Rancocas Creek and Trenton.
Canarsee, in Kings County, Long Island, on the southern end of Manhattan Island, and the eastern end of Staten Island, N. Y.
Gachwechnagechga, on Lehigh River, Pa.
Hackensack, in the valleys of Hackensack and Passaic Rivers.
Haverstraw, on the western bank of the lower Hudson, in Rockland County, N. Y.
Meletecunk, in Monmouth County.
Mosilian, on the eastern bank of Delaware River about Trenton.
Navasink, on the highlands of Navesink, claiming the land from Barnegat to the Raritan.
Pompton, on Pompton Creek.
Raritan, in the valley of Raritan River and on the left bank of Delaware River as far down as the falls at Trenton.
Beckgawawane, on the upper part of Manhattan Island and the adjacent mainland of New York west of the Bronx.
Tappan, on the western bank of Hudson River in Rockland County, N. Y., and Bergen County.
Waoranec, near Esopus Creek, Ulster County, N. Y.
The following may be considered as Unalachtigo, though I am in some doubt about the Neshamini:
Amimenipaty, at site of a large pigment plant of the Du Pont Company at Edgemoor, Del.
Asomoche, on the eastern bank of Delaware River between Salem and Camden.
Chikohoki, at site of Crane Brook Church, on west side of Delaware River near its junction with the Christanna River.
Eriwonec, about Old Man’s Creek in Salem or Gloucester County.
Hopokohacking, on site now occupied by Wilmington, Del.
Kahansuk, about Low Creek, Cumberland County.
Manta, about Salem Creek.
Memankitonna, on the present site of Claymont, Del., on Naaman’s Creek.
Nantuxet, in Pennsylvania and Delaware.
Naraticon, in southern New Jersey, probably on Raccoon Creek.
Neshamini, on Neshaminy Creek, Bucks County, Pa.
Okahoki, on Ridley and Crum Creeks, Delaware County, Pa.
Passayonk, on Schuylkill River, Pa., and along the western bank of Delaware River, perhaps extending into Delaware.
Shackamaxon, on the site of Kensington, Philadelphia, Pa.
Siconesse, on the eastern bank of Delaware River a short distance above Salem.
Tirans, on the northern shore of Delaware Bay about Cape May or in Cumberland County.
Yacomanshaghking, on a small stream about the present Camden.
It will not be practicable to separate the villages belonging to the three great divisions in all cases. The following are entered in the Handbook of American Indians (Hodge, 1907, 1910):
Achsinnink, Unalachtigo village on Hocking River, Ohio, about 1770.
Ahasimus, probably Unami, in northern New Jersey.
Alamingo, a village, probably Delaware, on Susquehanna River.
Allaquippa, possible name of a settlement at the mouth of the Youghiogheny River, Pa., in 1755.
Anderson’s Town, on the south side of White River about Anderson, Ind.
Au Glaize, on a southeastern branch of Maumee River, Ohio.
Bald Eagle’s Nest, on the right bank of Bald Eagle Creek near Milesburg, Pa.
Beaversville, near the junction of Buggy Creek and Canadian River, Okla.
Beavertown, on the east side of the extreme eastern head branch of Hocking River near Beavertown, Ohio.
Black Hawk, probably Delaware, about Mount Auburn, Shelby County, Ind.
Black Leg’s Village, probably Delaware, on the north bank of Conemaugh River in the southeastern part of Armstrong County, Pa.
Buckstown, probably Delaware, on the southeast side of White River, about 3 miles east of Anderson, Ind.
Bulletta Town, probably Delaware, in Coshocton County, Ohio, on Muskingum River about halfway between Walhonding River and Tomstown.
Cashiehtunk, probably Munsee, on Delaware River near the point where it is met by the New Jersey State line.
Catawaweshink, probably Delaware, on or near Susquehanna River, near Big Island, Pa.
Chikohoki, a Manta village on the site of Burlington, Burlington County, N. J.
Chilohocki, probably Delaware, on Miami River, Ohio.
Chinklacamoose, probably Delaware, on the site of Clearfield, Pa.
Clistowacka, near Bethlehem, Pa.
Communipaw, village of the Hackensack, at Communipaw.
Conemaugh, probably Delaware, about Conemaugh, Pa.
Coshocton, on the site of Coshocton, Ohio.
Crossweeksung, in Burlington County, probably about Crosswicks.
Custaloga’s Town, Unalachtigo, two villages, one near French Creek, opposite Franklin, Pa., the other on Walhonding River, near Killbucks Creek in Coshocton County, Ohio.
Edgpiiliik, in western New Jersey.
Eriwonec, about Old Man’s Creek in Salem or Gloucester County.
Frankstown, probably Delaware, about Frankstown, Pa.
Friedenshütten, a Moravian mission town on Susquehanna River a few miles below Wyalusing, probably in Wyoming County, Pa.
Friedensstadt, in Beaver County, Pa., probably near Darlington.
Gekelemukpechuenk, in Ohio, and perhaps identical with White Eyes’ Town.
Gnadenhütten, three Moravian Mission villages, one on the north side of Mahoning Creek near its junction with the Lehigh about the present Lehighton; a second on the site of Weissport, Carbon County, Pa.; and a third on the Muskingum River near the present Gnadenhutten, Ohio. (Brinton (1885) says there were two more towns of the same name.)
Goshgoshunk, with perhaps some Seneca, on Allegheny River about the upper part of Venango County, Pa.
Grapevine Town, perhaps Delaware, 8 miles up Captina River, Belmont County, Ohio.
Greentown, on the Black Fork of Mohican River near the boundary of Richland and Ashland Counties, Ohio.
Gweghkongh, probably Unami, in northern New Jersey, near Staten Island, or on the neighboring New York mainland.
Hespatingh, probably Unami, apparently in northern New Jersey, and perhaps near Bergen or Union Hill.
Hickorytown, probably about East Hickory or West Hickory, Pa.
Hockhocken, on Hocking River, Ohio.
Hogstown, between Venango and Buffalo Creek, Pa., perhaps identical with Kuskuski.
Jacobs Cabins, probably Delaware, on Youghiogheny River, perhaps near Jacobs Creek, Fayette County, Pa.
Jeromestown, near Jeromesville, Ohio.
Kalbauvane, probably Delaware, on the headwaters of the west branch of Susquehanna River, Pa.
Kanestio, Delaware and other Indians, on the upper Susquehanna River, near Kanestio Creek in Steuben County, N. Y.
Kanhangton, about the mouth of Chemung River in the northern part of Bradford County, Pa.
Katamoonchink, perhaps the name of a Delaware village near West Whiteland, Chester County, Pa.
Kickenapawling, probably Delaware and Iroquois, at the junction of Stony Creek with Conemaugh River, approximately on the site of Johnstown, Pa.
Kiktheswemud, probably Delaware, near Anderson, Ind., perhaps identical with Buckstown or Little Munsee Town.
Killbuck’s Town, on the east side of Killbuck Creek, about 10 miles south of Wooster, Ohio.
Kishakoquilla, two towns successively occupied by a chief of the name, one about Kishacoquillas, Mifflin County, Pa., the other on French Creek about 7 miles below Meadville, Crawford County, Pa.
Kiskiminetas, on the south side of lower Kiskiminetas Creek, near its mouth, Westmoreland County, Pa.
Kiskominitoes, on the north bank of Ohio River between the Hocking and Scioto Rivers, Ohio.
Kittanning, divided into several settlements and mixed with Iroquois and Caughnawaga, near Kittanning on Allegheny River, Armstrong County, Pa.
Kohhokking, near “Painted Post” in Steuben County, N. Y., or Elmira, Chemung County, N. Y.
Kuskuski, with Iroquois, on Beaver Creek, near Newcastle, in Lawrence County, Pa.
Languntennenk, Moravian Delaware near Darlington, Beaver County, Pa.
Lawunkhannek, Moravian Delaware on Allegheny River above Franklin, Venango County, Pa.
Lichtenau, Moravian Delaware on the east side of Muskingum River, 3 miles below Coshocton, Ohio.
Little Munsee Town, Munsee, a few miles east of Anderson, Ind.
Macharienkonck, Minisink, in the bend of Delaware River, Pike County, Pa., opposite Port Jervis.
Macocks, some distance north of Chikohoki, which was probably at Wilmington, Del., perhaps the village of the Okahoki in Pennsylvania.
Mahoning, on the west bank of Mahoning River, perhaps between Warren and Youngstown, Ohio.
Mechgachkamic, perhaps Unami, probably near Hackensack, N. J.
Meggeckessou, on Delaware River at Trenton Falls, N. J.
Meniolagomeka, on Aquanshicola Creek, Carbon County, Pa.
Meochkonck, Minisink, on the upper Delaware River in southeastern New York.
Minisink, Minisink, in Sussex County, N. J., near where the State line crosses Delaware River.
Munceytown, Munsee, on Thames River northwest of Brantford, Ontario, Canada.
Muskingum, probably Delaware, on the west bank of Muskingum River, Ohio.
Nain, Moravian Indians, principally Delaware, near Bethlehem, Pa.
Newcomerstown, village of Chief Newcomer, about the site of New Comerstown, Tuscarawas County, Ohio.
Newtown, the name of three towns probably of the Delaware and Iroquois, one on the north bank of Licking River, near the site of the present Zanesville, Ohio; a second about the site of Newtown, Ohio; and a third on the west side of Wills Creek near the site of Cambridge, Ohio.
Nyack, probably Canarsee, about the site of Fort Hamilton, Kings County, Long Island, afterward removed to Staten Island.
Nyack, Unami probably, on the west bank of Hudson River about the present Nyack, N. Y.
Ostonwackin, with Cayuga, Oneida, and other Indians, on the site of the present Montoursville, Pa.
Outaunink, Munsee, on the north bank of White River, opposite Muncie, Ind.
Owl’s Town, probably Delaware, on Mohican River, Coshocton County, Ohio.
Pakadasank, probably Munsee, about the site of Crawford, Orange County, N. Y.
Papagonk, probably Munsee, in Ulster County, N. Y., also placed near Pepacton, Delaware County, N. Y.
Passycotcung, on Chemung River, N. Y.
Peckwes, Munsee or Shawnee, about 10 miles from Hackensack.
Pematuning, probably Delaware, near Shenango, Pa.
Pequottink, Moravian Delaware, on the east bank of Huron River, near Milan, Ohio.
Playwickey, probably Unalachtigo, in Bucks County, Pa.
Pohkopophunk, in eastern Pennsylvania, probably in Carbon County.
Queenashawakee, on the upper Susquehanna River, Pa.
Raincock, Rancocas, in Burlington County.
Remahenonc, perhaps Unami, near New York City.
Roymount, near Cape May.
Salem, Moravian Delaware, on the west bank of Tuscarawas River, 1½ miles southwest of Port Washington, Tuscarawas County, Ohio.
Salt Lick, probably Delaware, on Mahoning River near Warren, Ohio.
Sawcunk, with Shawnee and Mingo, near the mouth of Beaver Creek, about the site of the present Beaver, Pa.
Sawkin, on the east bank of Delaware River in New Jersey.
Schepinaikonck, Minisink, perhaps in Orange County, N. Y.
Schipston, probably Delaware, at the head of Juniata River, Pa.
Schoenbrunn, Moravian Munsee, about 2 miles below the site of New Philadelphia, Ohio.
Seven Houses, near the ford of Beaver Creek just above its mouth, Beaver County, Pa.
Shackamaxon, on the site of Kensington, Philadelphia, Pa.
Shamokin, with Shawnee, Iroquois, and Tutelo, on north sides of Susquehanna River including the island at the site of Sunbury, Pa.
Shannopin’s Town, on Allegheny River about 2 miles above its junction with the Monongahela.
Shenango, with other tribes, the name of several towns, one on the north bank of Ohio River a little below Economy, Pa.; one at the junction of Conewango and the Allegheny; and one some distance up Big Beaver, near Kuskuski (q. v.).
Sheshequin, with Iroquois, about 6 miles below Tioga Point, Bradford County, Pa.
Soupnapka, on the east bank of Delaware River in New Jersey.
Three Legs Town, named from a chief, on the east bank of Muskingum River a few miles south of the mouth of the Tuscarawas, Coshocton County, Ohio.
Tioga, with Nanticoke, Mahican, Saponi, Tutelo, etc., on the site of Athens, Pa.
Tom’s Town, on Scioto River, a short distance below the present Chillicothe and near the mouth of Paint Creek, Ohio.
Tullihas, with Mahican and Caughnawaga, on the west branch of Muskingum River, Ohio, about 20 miles above the forks.
Tuscarawas, with Wyandot, on Tuscarawas River, Ohio, near the mouth of Big Sandy River.
Venango, with Seneca, Shawnee, Wyandot, Ottawa, etc., at the site of Franklin, Venango County, Pa.
Wechquetank, Moravian Delaware, about 8 miles beyond the Blue Ridge, northwest from Bethlehem, Pa., probably near the present Mauch Chunk.
Wekeeponall, on the west bank of the Susquehanna River, about the mouth of Loyalstock Creek in Lycoming County, Pa., probably identical with Queen Esther’s Town.
Walagamika, on the site of Nazareth, Lehigh County, Pa.
White-eyes Village, named from a chief, on the site of Duncan’s Falls, 9 miles below Zanesville, Ohio.
White Woman’s Town, near the junction of Walhonding and Killbuck Rivers, about 7 miles northwest of the forks of the Muskingum River, in Coshocton County, Ohio.
Will’s Town, on the east bank of Muskingum River at the mouth of Wills Creek,
Muskingum County, Ohio.
Woapikamikunk, in the valley of White River, Ind.
Wyalusing, Munsee and Iroquois, on the site of Wyalusing, Bradford County, Pa.
Wyoming, with Iroquois, Shawnee, Mahican, and Nanticoke; later entirely Delaware and Munsee; principal settlement at the site of Wilkes-Barre, Pa.
History. The traditional history of the Delaware set forth in the famous Walam Olum (see Brinton, 1882-85, vol. 5), gave them an origin somewhere northwest of their later habitat. They were found by the earliest white voyagers in the historic seats above given. The Dutch came into contact with the Unami and Munsee Delaware in 1609 and the Swedes with the Unalachtigo in 1637. Both were succeeded by the English in 1664, but the most notable event in Delaware history took place in 1682 when these Indians held their first council with William Penn at what is now Germantown, Philadelphia. About 1720 the Iroquois assumed dominion over them and they were gradually crowded west by the white colonists, reaching the Allegheny as early as 1724, and settling at Wyoming and other points on the Susquehanna about 1742. In 1751, by invitation of the Huron, they began to form villages in eastern Ohio, and soon the greater part of them were on the Muskingum and other Ohio streams. Backed by the French and by other western tribes, they now freed themselves from Iroquois control and opposed the English settlers steadily until the treaty of Greenville in 1795. Notable missionary work was done among them by the Moravians in the seventeenth and eighteenth centuries. About 1770 they received permission from the Miami and Piankashaw to settle between the Ohio and White Rivers, Ind. In 1789, by permission of the Spanish government, a part moved to Missouri and later to Arkansas, along with a band of Shawnee, and by 1820 they had found their way to Texas. By 1835 most of the bands had been gathered on a reservation in Kansas, but in 1867 the greater part of these removed to the present Oklahoma, where some of them occupied a corner of the Cherokee Nation. Others are with the Caddo and Wichita in southwestern Oklahoma, a few Munsee are with the Stockbridges in Wisconsin, and some are scattered in other parts of the United States.
In Ontario, Canada, are three bands, the Delawares of Grand River, near Hagersville; the Moravians of the Thames, near Bothwell; and the Munceys of the Thames, near Muncey, nearly all of whom are of the Munsee division.
Population. Mooney (1928) estimates that there were 8,000 Delaware in 1600 not including the Canarsee of Long Island; estimates made during the eighteenth century vary between 2,400 and 3,000; nineteenth-century estimates are much lower; and the United States Census of 1910 returned 914 Delawares and 71 Munsee, or a total of 985, to which must be added the bands in Canada, making perhaps 1,600 all together. 140 Delaware were reported on the Wichita Reservation, Okla., in 1937.
Connection in which they have become noted. The Delaware are noted as one of the very few tribes which have come to be known by an English term, and as one of the chief antagonists of the Whites while the latter were forcing their way westward, but in later years as furnishing the most reliable scouts in White employ. A different sort of fame has been attained by one of their early chiefs, Tamenend, whose name, in the form Tammany, was applied to a philanthropic society, a place of meeting, and a famous political organization. Delaware chiefs signed the famous treaty with Penn under the oak at Shackamaxon, and their tribes occupied Manhattan Island and the shores of New York Harbor at the arrival of the Dutch. The name Delaware has been used for postoffices in Arkansas, Iowa, Kentucky, Missouri, New Jersey, Ohio, and Oklahoma, besides the State of Delaware. Lenape is a post village in Leavenworth County, Kans., and Lenapah in Nowata County, Okla. |
What Sort of Nation Middle
1. What sort of nation has Australia
been? What sort of nation is it today?
Australian identity yesterday and today
2. How has immigration shaped the kind of
nation we are?
The make-up of Australia's population:
Australia's immigration policies
Marketing an immigration policy
Your class population
Advice for new immigrants to Australia
Assimilation to multiculturalism
Conditions of citizenship
3. How do economic factors shape and reflect
the kind of nation we are?
Changes to the economy – good or bad?
Changes to the workforce
Education and work
Types of work
Jobs and their value
Controlling the market
Should the same laws and regulations apply to
Trade and work
4. What responsibilities do individuals,
communities and governments have for the welfare
of Australian citizens?
Social security – different kinds
What are the governments
An international comparison
5. What kind of country do we want Australia to be?
What do you value?
The unit begins with a number of
images of Australia. Examples include
Aboriginal art, extracts from speeches, poetry,
early Australian art, drawings and photographs.
Students then interpret data to decide how
immigration has helped shape the type of nation
Australia is today, and then analyse articles on
immigration policies. Students investigate the
difference between assimilation and multiculturalism.
The next section uses statistics and stimulus
material to analyse the economic factors that
shape the nation. The unit concludes with the
question What responsibilities do
individuals, communities and governments have for
the welfare of Australian citizens?
Teachers should appreciate having the illustrations and questions for students to reflect on their own and others' ideas of
Australia as a nation.
Students evaluate evidence from the past to
demonstrate how such accounts reflect the culture
in which they were constructed.
Students collaboratively identify the values
underlying contributions by diverse individuals
and groups in Australian or Asian
Students develop criteria-based judgments
about the ethical behaviour of people in the
Students evaluate evidence of the ways in
which their personal history and the history of
others have been constructed.
Students produce or perform an account that
links their own histories and those of others.
Students use maps, tables and statistical data
to express predictions about the impact of change
Students use maps and graphs that interpret
data to suggest links between geographic features
of places and changes occurring within these
Students develop a proposal to promote a
socially just response to perceptions of culture
associated with a current issue.
Students describe specific instances of
cultural change resulting from government
legislation or policies that have impacted on
other cultural groups.
Students use surveys and structured interviews
to analyse community attitudes towards cultural
Students make practical suggestions for
improving productivity and working conditions in
an industry or business.
Students propose changes to economic,
political or legal systems to make them more
democratic and socially just.
Student suggests solutions to problems
involving inequitable distribution of power and
resources in a global context.
Students represent situations both before and
after a period of rapid change.
Students collaborate to locate and
systematically record information about the
contributions of people in diverse past settings.
Students use maps, diagrams and statistics to
justify placing value on environments in
Australia and the Asia-Pacific region.
Students share their sense of belonging to a
group to analyse cultural aspects that construct
Students describe how governments have caused
changes to particular groups.
Students express how dominant and marginalised
identities are constructed by influences
including the media.
Students evaluate the relationships between
government, economic or ecological systems.
Students use a structured decision-making
process to suggest participatory action regarding
a significant current environmental, business,
political or legal issue.
Economy and society
- Describe the development of the
- Illustrate how local, state, national and
international issues, elections and party
policy differences influence the
development of the economy
6.3 Analyse vocational pathways and education
and training requirements to develop possible
career paths and work opportunities
- Identify future job opportunities and
predicted labour market changes in
5.3 Explain key factors that influence the
- Analyse the role and impact of the
government, individuals and organisations
on economic activity including how they
interact to produce, market and consume
- Explain the elements of economics and the
factors that affect resource use
- Discuss how and why technology and
changing community values affect resource use
Interrelationship between economic government
and legal activity.
Small business studies.
- Parliament at Work CD ROM. Budget
simulation play the role of the
- Stories of Democracy CD ROM. Begins
with visuals and text on British
heritage, and then provides opportunities
to investigate immigration debates,
women's political issues, Aboriginal
issues, and unemployment. No game. |
The New England colonies
Although lacking a charter, the founders of Plymouth in Massachusetts were, like their counterparts in Virginia, dependent upon private investments from profit-minded backers to finance their colony. The nucleus of that settlement was drawn from an enclave of English émigrés in Leiden, Holland (now in The Netherlands). These religious Separatists believed that the true church was a voluntary company of the faithful under the “guidance” of a pastor and tended to be exceedingly individualistic in matters of church doctrine. Unlike the settlers of Massachusetts Bay, these Pilgrims chose to “separate” from the Church of England rather than to reform it from within.
In 1620, the first year of settlement, nearly half the Pilgrim settlers died of disease. From that time forward, however, and despite decreasing support from English investors, the health and the economic position of the colonists improved. The Pilgrims soon secured peace treaties with most of the Indians around them, enabling them to devote their time to building a strong, stable economic base rather than diverting their efforts toward costly and time-consuming problems of defending the colony from attack. Although none of their principal economic pursuits—farming, fishing, and trading—promised them lavish wealth, the Pilgrims in America were, after only five years, self-sufficient.
Although the Pilgrims were always a minority in Plymouth, they nevertheless controlled the entire governmental structure of their colony during the first four decades of settlement. Before disembarking from the Mayflower in 1620, the Pilgrim founders, led by William Bradford, demanded that all the adult males aboard who were able to do so sign a compact promising obedience to the laws and ordinances drafted by the leaders of the enterprise. Although the Mayflower Compact has been interpreted as an important step in the evolution of democratic government in America, it is a fact that the compact represented a one-sided arrangement, with the settlers promising obedience and the Pilgrim founders promising very little. Although nearly all the male inhabitants were permitted to vote for deputies to a provincial assembly and for a governor, the colony, for at least the first 40 years of its existence, remained in the tight control of a few men. After 1660 the people of Plymouth gradually gained a greater voice in both their church and civic affairs, and by 1691, when Plymouth colony (also known as the Old Colony) was annexed to Massachusetts Bay, the Plymouth settlers had distinguished themselves by their quiet, orderly ways.
The Puritans of the Massachusetts Bay Colony, like the Pilgrims, sailed to America principally to free themselves from religious restraints. Unlike the Pilgrims, the Puritans did not desire to “separate” themselves from the Church of England but, rather, hoped by their example to reform it. Nonetheless, one of the recurring problems facing the leaders of the Massachusetts Bay colony was to be the tendency of some, in their desire to free themselves from the alleged corruption of the Church of England, to espouse Separatist doctrine. When these tendencies or any other hinting at deviation from orthodox Puritan doctrine developed, those holding them were either quickly corrected or expelled from the colony. The leaders of the Massachusetts Bay enterprise never intended their colony to be an outpost of toleration in the New World; rather, they intended it to be a “Zion in the wilderness,” a model of purity and orthodoxy, with all backsliders subject to immediate correction.
The civil government of the colony was guided by a similar authoritarian spirit. Men such as John Winthrop, the first governor of Massachusetts Bay, believed that it was the duty of the governors of society not to act as the direct representatives of their constituents but rather to decide, independently, what measures were in the best interests of the total society. The original charter of 1629 gave all power in the colony to a General Court composed of only a small number of shareholders in the company. On arriving in Massachusetts, many disfranchised settlers immediately protested against this provision and caused the franchise to be widened to include all church members. These “freemen” were given the right to vote in the General Court once each year for a governor and a Council of Assistants. Although the charter of 1629 technically gave the General Court the power to decide on all matters affecting the colony, the members of the ruling elite initially refused to allow the freemen in the General Court to take part in the lawmaking process on the grounds that their numbers would render the court inefficient.
In 1634 the General Court adopted a new plan of representation whereby the freemen of each town would be permitted to select two or three delegates and assistants, elected separately but sitting together in the General Court, who would be responsible for all legislation. There was always tension existing between the smaller, more prestigious group of assistants and the larger group of deputies. In 1644, as a result of this continuing tension, the two groups were officially lodged in separate houses of the General Court, with each house reserving a veto power over the other.
Despite the authoritarian tendencies of the Massachusetts Bay colony, a spirit of community developed there as perhaps in no other colony. The same spirit that caused the residents of Massachusetts to report on their neighbours for deviation from the true principles of Puritan morality also prompted them to be extraordinarily solicitous about their neighbours’ needs. Although life in Massachusetts was made difficult for those who dissented from the prevailing orthodoxy, it was marked by a feeling of attachment and community for those who lived within the enforced consensus of the society.
Many New Englanders, however, refused to live within the orthodoxy imposed by the ruling elite of Massachusetts, and both Connecticut and Rhode Island were founded as a by-product of their discontent. The Rev. Thomas Hooker, who had arrived in Massachusetts Bay in 1633, soon found himself in opposition to the colony’s restrictive policy regarding the admission of church members and to the oligarchic power of the leaders of the colony. Motivated both by a distaste for the religious and political structure of Massachusetts and by a desire to open up new land, Hooker and his followers began moving into the Connecticut valley in 1635. By 1636 they had succeeded in founding three towns—Hartford, Windsor, and Wethersfield. In 1638 the separate colony of New Haven was founded; it remained separate until it was absorbed into Connecticut under the royal charter that Connecticut received in 1662.
Roger Williams, the man closely associated with the founding of Rhode Island, was banished from Massachusetts because of his unwillingness to conform to the orthodoxy established in that colony. Williams’s views conflicted with those of the ruling hierarchy of Massachusetts in several important ways. His own strict criteria for determining who was regenerate, and therefore eligible for church membership, finally led him to deny any practical way to admit anyone into the church. Once he recognized that no church could ensure the purity of its congregation, he ceased using purity as a criterion and instead opened church membership to nearly everyone in the community. Moreover, Williams showed distinctly Separatist leanings, preaching that the Puritan church could not possibly achieve purity as long as it remained within the Church of England. Finally, and perhaps most serious, he openly disputed the right of the Massachusetts leaders to occupy land without first purchasing it from the Native Americans.
The unpopularity of Williams’s views forced him to flee Massachusetts Bay for Providence in 1636. In 1639 William Coddington, another dissenter in Massachusetts, settled his congregation in Newport. Four years later Samuel Gorton, yet another minister banished from Massachusetts Bay because of his differences with the ruling oligarchy, settled in Shawomet (later renamed Warwick). In 1644 these three communities joined with a fourth in Portsmouth under one charter to become one colony called Providence Plantation in Narragansett Bay.
The early settlers of New Hampshire and Maine were also ruled by the government of Massachusetts Bay. New Hampshire was permanently separated from Massachusetts in 1692, although it was not until 1741 that it was given its own royal governor. Maine remained under the jurisdiction of Massachusetts until 1820.
The middle colonies
New Netherland, founded in 1624 at Fort Orange (now Albany) by the Dutch West India Company, was but one element in a wider program of Dutch expansion in the first half of the 17th century. In 1664 the English captured the colony of New Netherland, renaming it New York after James, duke of York, brother of Charles II, and placing it under the proprietary control of the duke. In return for an annual gift to the king of 40 beaver skins, the duke of York and his resident board of governors were given extraordinary discretion in the ruling of the colony. Although the grant to the duke of York made mention of a representative assembly, the duke was not legally obliged to summon it and in fact did not summon it until 1683. The duke’s interest in the colony was chiefly economic, not political, but most of his efforts to derive economic gain from New York proved futile. Indians, foreign interlopers (the Dutch actually recaptured New York in 1673 and held it for more than a year), and the success of the colonists in evading taxes made the proprietor’s job a frustrating one.
In February 1685 the duke of York found himself not only proprietor of New York but also king of England, a fact that changed the status of New York from that of a proprietary to a royal colony. The process of royal consolidation was accelerated when in 1688 the colony, along with the New England and New Jersey colonies, was made part of the ill-fated Dominion of New England. In 1689 Jacob Leisler, a German merchant living on Long Island, led a successful revolt against the rule of the deputy governor, Francis Nicholson. The revolt, which was a product of dissatisfaction with a small aristocratic ruling elite and a more general dislike of the consolidated scheme of government of the Dominion of New England, served to hasten the demise of the dominion.
Pennsylvania, in part because of the liberal policies of its founder, William Penn, was destined to become the most diverse, dynamic, and prosperous of all the North American colonies. Penn himself was a liberal, but by no means radical, English Whig. His Quaker (Society of Friends) faith was marked not by the religious extremism of some Quaker leaders of the day but rather by an adherence to certain dominant tenets of the faith—liberty of conscience and pacifism—and by an attachment to some of the basic tenets of Whig doctrine. Penn sought to implement these ideals in his “holy experiment” in the New World.
Penn received his grant of land along the Delaware River in 1681 from Charles II as a reward for his father’s service to the crown. The first “frame of government” proposed by Penn in 1682 provided for a council and an assembly, each to be elected by the freeholders of the colony. The council was to have the sole power of initiating legislation; the lower house could only approve or veto bills submitted by the council. After numerous objections about the “oligarchic” nature of this form of government, Penn issued a second frame of government in 1683 and then a third in 1696, but even these did not wholly satisfy the residents of the colony. Finally, in 1701, a Charter of Privileges, giving the lower house all legislative power and transforming the council into an appointive body with advisory functions only, was approved by the citizens. The Charter of Privileges, like the other three frames of government, continued to guarantee the principle of religious toleration to all Protestants.
Pennsylvania prospered from the outset. Although there was some jealousy between the original settlers (who had received the best land and important commercial privileges) and the later arrivals, economic opportunity in Pennsylvania was on the whole greater than in any other colony. Beginning in 1683 with the immigration of Germans into the Delaware valley and continuing with an enormous influx of Irish and Scotch-Irish in the 1720s and ’30s, the population of Pennsylvania increased and diversified. The fertile soil of the countryside, in conjunction with a generous government land policy, kept immigration at high levels throughout the 18th century. Ultimately, however, the continuing influx of European settlers hungry for land spelled doom for the pacific Indian policy initially envisioned by Penn. “Economic opportunity” for European settlers often depended on the dislocation, and frequent extermination, of the American Indian residents who had initially occupied the land in Penn’s colony.
New Jersey remained in the shadow of both New York and Pennsylvania throughout most of the colonial period. Part of the territory ceded to the duke of York by the English crown in 1664 lay in what would later become the colony of New Jersey. The duke of York in turn granted that portion of his lands to John Berkeley and George Carteret, two close friends and allies of the king. In 1665 Berkeley and Carteret established a proprietary government under their own direction. Constant clashes, however, developed between the New Jersey and the New York proprietors over the precise nature of the New Jersey grant. The legal status of New Jersey became even more tangled when Berkeley sold his half interest in the colony to two Quakers, who in turn placed the management of the colony in the hands of three trustees, one of whom was Penn. The area was then divided into East Jersey, controlled by Carteret, and West Jersey, controlled by Penn and the other Quaker trustees. In 1682 the Quakers bought East Jersey. A multiplicity of owners and an uncertainty of administration caused both colonists and colonizers to feel dissatisfied with the proprietary arrangement, and in 1702 the crown united the two Jerseys into a single royal province.
When the Quakers purchased East Jersey, they also acquired the tract of land that was to become Delaware, in order to protect their water route to Pennsylvania. That territory remained part of the Pennsylvania colony until 1704, when it was given an assembly of its own. It remained under the Pennsylvania governor, however, until the American Revolution.
The Carolinas and Georgia
The English crown had issued grants to the Carolina territory as early as 1629, but it was not until 1663 that a group of eight proprietors—most of them men of extraordinary wealth and power even by English standards—actually began colonizing the area. The proprietors hoped to grow silk in the warm climate of the Carolinas, but all efforts to produce that valuable commodity failed. Moreover, it proved difficult to attract settlers to the Carolinas; it was not until 1718, after a series of violent Indian wars had subsided, that the population began to increase substantially. The pattern of settlement, once begun, followed two paths. North Carolina, which was largely cut off from the European and Caribbean trade by its unpromising coastline, developed into a colony of small to medium farms. South Carolina, with close ties to both the Caribbean and Europe, produced rice and, after 1742, indigo for a world market. The early settlers in both areas came primarily from the West Indian colonies. This pattern of migration was not, however, as distinctive in North Carolina, where many of the residents were part of the spillover from the natural expansion of Virginians southward.
The original framework of government for the Carolinas, the Fundamental Constitutions, drafted in 1669 by Anthony Ashley Cooper (Lord Shaftesbury) with the help of the philosopher John Locke, was largely ineffective because of its restrictive and feudal nature. The Fundamental Constitutions was abandoned in 1693 and replaced by a frame of government diminishing the powers of the proprietors and increasing the prerogatives of the provincial assembly. In 1729, primarily because of the proprietors’ inability to meet the pressing problems of defense, the Carolinas were converted into the two separate royal colonies of North and South Carolina.
The proprietors of Georgia, led by James Oglethorpe, were wealthy philanthropic English gentlemen. It was Oglethorpe’s plan to transport imprisoned debtors to Georgia, where they could rehabilitate themselves by profitable labour and make money for the proprietors in the process. Those who actually settled in Georgia—and by no means all of them were impoverished debtors—encountered a highly restrictive economic and social system. Oglethorpe and his partners limited the size of individual landholdings to 500 acres (about 200 hectares), prohibited slavery, forbade the drinking of rum, and instituted a system of inheritance that further restricted the accumulation of large estates. The regulations, though noble in intention, created considerable tension between some of the more enterprising settlers and the proprietors. Moreover, the economy did not live up to the expectations of the colony’s promoters. The silk industry in Georgia, like that in the Carolinas, failed to produce even one profitable crop.
The settlers were also dissatisfied with the political structure of the colony; the proprietors, concerned primarily with keeping close control over their utopian experiment, failed to provide for local institutions of self-government. As protests against the proprietors’ policies mounted, the crown in 1752 assumed control over the colony; subsequently, many of the restrictions that the settlers had complained about, notably those discouraging the institution of slavery, were lifted.
British policy toward the American colonies was inevitably affected by the domestic politics of England; since the politics of England in the 17th and 18th centuries were never wholly stable, it is not surprising that British colonial policy during those years never developed along clear and consistent lines. During the first half century of colonization, it was even more difficult for England to establish an intelligent colonial policy because of the very disorganization of the colonies themselves. It was nearly impossible for England to predict what role Virginia, Maryland, Massachusetts, Connecticut, and Rhode Island would play in the overall scheme of empire because of the diversity of the aims and governmental structures of those colonies. By 1660, however, England had taken the first steps in reorganizing her empire in a more profitable manner. The Navigation Act of 1660, a modification and amplification of a temporary series of acts passed in 1651, provided that goods bound to England or to English colonies, regardless of origin, had to be shipped only in English vessels; that three-fourths of the personnel of those ships had to be Englishmen; and that certain “enumerated articles,” such as sugar, cotton, and tobacco, were to be shipped only to England, with trade in those items with other countries prohibited. This last provision hit Virginia and Maryland particularly hard; although those two colonies were awarded a monopoly over the English tobacco market at the same time that they were prohibited from marketing their tobacco elsewhere, there was no way that England alone could absorb their tobacco production.
The 1660 act proved inadequate to safeguard the entire British commercial empire, and in subsequent years other navigation acts were passed, strengthening the system. In 1663 Parliament passed an act requiring all vessels with European goods bound for the colonies to pass first through English ports to pay customs duties. In order to prevent merchants from shipping the enumerated articles from colony to colony in the coastal trade and then taking them to a foreign country, in 1673 Parliament required that merchants post bond guaranteeing that those goods would be taken only to England. Finally, in 1696 Parliament established a Board of Trade to oversee Britain’s commercial empire, instituted mechanisms to ensure that the colonial governors aided in the enforcement of trade regulations, and set up vice admiralty courts in America for the prosecution of those who violated the Navigation Acts. On the whole, this attempt at imperial consolidation—what some historians have called the process of Anglicization—was successful in bringing the economic activities of the colonies under closer crown control. While a significant amount of colonial trade continued to evade British regulation, it is nevertheless clear that the British were at least partially successful in imposing greater commercial and political order on the American colonies during the period from the late-17th to the mid-18th century.
In addition to the agencies of royal control in England, there were a number of royal officials in America responsible not only for aiding in the regulation of Britain’s commercial empire but also for overseeing the internal affairs of the colonies. The weaknesses of royal authority in the politics of provincial America were striking, however. In some areas, particularly in the corporate colonies of New England during the 17th century and in the proprietary colonies throughout their entire existence, direct royal authority in the person of a governor responsible to the crown was nonexistent. The absence of a royal governor in those colonies had a particularly deleterious effect on the enforcement of trade regulations. In fact, the lack of royal control over the political and commercial activities of New England prompted the Board of Trade to overturn the Massachusetts Bay charter in 1684 and to consolidate Massachusetts, along with the other New England colonies and New York, into the Dominion of New England. After the colonists, aided by the turmoil of the Glorious Revolution of 1688 in England, succeeded in overthrowing the dominion scheme, the crown installed a royal governor in Massachusetts to protect its interests.
In those colonies with royal governors—the number of those colonies grew from one in 1650 to eight in 1760—the crown possessed a mechanism by which to ensure that royal policy was enforced. The Privy Council issued each royal governor in America a set of instructions carefully defining the limits of provincial authority. The royal governors were to have the power to decide when to call the provincial assemblies together, to prorogue, or dissolve, the assemblies, and to veto any legislation passed by those assemblies. The governor’s power over other aspects of the political structure of the colony was just as great. In most royal colonies he was the one official primarily responsible for the composition of the upper houses of the colonial legislatures and for the appointment of important provincial officials, such as the treasurer, attorney general, and all colonial judges. Moreover, the governor had enormous patronage powers over the local agencies of government. The officials of the county court, who were the principal agents of local government, were appointed by the governor in most of the royal colonies. Thus, the governor had direct or indirect control over every agency of government in America.
The growth of provincial power
The distance separating England and America, the powerful pressures exerted on royal officials by Americans, and the inevitable inefficiency of any large bureaucracy all served to weaken royal power and to strengthen the hold of provincial leaders on the affairs of their respective colonies. During the 18th century the colonial legislatures gained control over their own parliamentary prerogatives, achieved primary responsibility for legislation affecting taxation and defense, and ultimately took control over the salaries paid to royal officials. Provincial leaders also made significant inroads into the governor’s patronage powers. Although theoretically the governor continued to control the appointments of local officials, in reality he most often automatically followed the recommendations of the provincial leaders in the localities in question. Similarly, the governor’s councils, theoretically agents of royal authority, came to be dominated by prominent provincial leaders who tended to reflect the interests of the leadership of the lower house of assembly rather than those of the royal government in London.
Thus, by the mid-18th century most political power in America was concentrated in the hands of provincial rather than royal officials. These provincial leaders undoubtedly represented the interests of their constituents more faithfully than any royal official could, but it is clear that the politics of provincial America were hardly democratic by modern standards. In general, both social prestige and political power tended to be determined by economic standing, and the economic resources of colonial America, though not as unevenly distributed as in Europe, were nevertheless controlled by relatively few men.
In the Chesapeake Bay societies of Virginia and Maryland, and particularly in the regions east of the Blue Ridge mountains, a planter class came to dominate nearly every aspect of those colonies’ economic life. These same planters, joined by a few prominent merchants and lawyers, dominated the two most important agencies of local government—the county courts and the provincial assemblies. This extraordinary concentration of power in the hands of a wealthy few occurred in spite of the fact that a large percentage of the free adult male population (some have estimated as high as 80 to 90 percent) was able to participate in the political process. The ordinary citizens of the Chesapeake society, and those of most colonies, nevertheless continued to defer to those whom they considered to be their “betters.” Although the societal ethic that enabled power to be concentrated in the hands of a few was hardly a democratic one, there is little evidence, at least for Virginia and Maryland, that the people of those societies were dissatisfied with their rulers. In general, they believed that their local officials ruled responsively.
In the Carolinas a small group of rice and indigo planters monopolized much of the wealth. As in Virginia and Maryland, the planter class came to constitute a social elite. As a rule, the planter class of the Carolinas did not have the same long tradition of responsible government as did the ruling oligarchies of Virginia and Maryland, and, as a consequence, they tended to be absentee landlords and governors, often passing much of their time in Charleston, away from their plantations and their political responsibilities.
The western regions of both the Chesapeake and Carolina societies displayed distinctive characteristics of their own. Ruling traditions were fewer, accumulations of land and wealth less striking, and the social hierarchy less rigid in the west. In fact, in some western areas antagonism toward the restrictiveness of the east and toward eastern control of the political structure led to actual conflict. In both North and South Carolina armed risings of varying intensity erupted against the unresponsive nature of the eastern ruling elite. As the 18th century progressed, however, and as more men accumulated wealth and social prestige, the societies of the west came more closely to resemble those of the east.
New England society was more diverse and the political system less oligarchic than that of the South. In New England the mechanisms of town government served to broaden popular participation in government beyond the narrow base of the county courts.
The town meetings, which elected the members of the provincial assemblies, were open to nearly all free adult males. Despite this, a relatively small group of men dominated the provincial governments of New England. As in the South, men of high occupational status and social prestige were closely concentrated in leadership positions in their respective colonies; in New England, merchants, lawyers, and to a lesser extent clergymen made up the bulk of the social and political elite.
The social and political structure of the middle colonies was more diverse than that of any other region in America. New York, with its extensive system of manors and manor lords, often displayed genuinely feudal characteristics. The tenants on large manors often found it impossible to escape the influence of their manor lords. The administration of justice, the election of representatives, and the collection of taxes often took place on the manor itself. As a consequence, the large landowning families exercised an inordinate amount of economic and political power. The Great Rebellion of 1766, a short-lived outburst directed against the manor lords, was a symptom of the widespread discontent among the lower and middle classes. By contrast, Pennsylvania’s governmental system was more open and responsive than that of any other colony in America. A unicameral legislature, free from the restraints imposed by a powerful governor’s council, allowed Pennsylvania to be relatively independent of the influence of both the crown and the proprietor. This fact, in combination with the tolerant and relatively egalitarian bent of the early Quaker settlers and the subsequent immigration of large numbers of Europeans, made the social and political structure of Pennsylvania more democratic but more faction-ridden than that of any other colony.
The increasing political autonomy of the American colonies was a natural reflection of their increased stature in the overall scheme of the British Empire. In 1650 the population of the colonies had been about 52,000; in 1700 it was perhaps 250,000, and by 1760 it was approaching 1,700,000. Virginia had increased from about 54,000 in 1700 to approximately 340,000 in 1760. Pennsylvania had begun with about 500 settlers in 1681 and had attracted at least 250,000 people by 1760. And America’s cities were beginning to grow as well. By 1765 Boston had reached 15,000; New York City, 16,000–17,000; and Philadelphia, the largest city in the colonies, 20,000.
Part of that population growth was the result of the involuntary immigration of African slaves. During the 17th century, slaves remained a tiny minority of the population. By the mid-18th century, after Southern colonists discovered that the profits generated by their plantations could support the relatively large initial investments needed for slave labour, the volume of the slave trade increased markedly. In Virginia the slave population leaped from about 2,000 in 1670 to perhaps 23,000 in 1715 and reached 150,000 on the eve of the American Revolution. In South Carolina it was even more dramatic. In 1700 there were probably no more than 2,500 blacks in the population; by 1765 there were 80,000–90,000, with blacks outnumbering whites by about 2 to 1.
One of the principal attractions for the immigrants who moved to America voluntarily was the availability of inexpensive arable land. The westward migration to America’s frontier—in the early 17th century all of America was a frontier, and by the 18th century the frontier ranged anywhere from 10 to 200 miles (15 to 320 km) from the coastline—was to become one of the distinctive elements in American history. English Puritans, beginning in 1629 and continuing through 1640, were the first to immigrate in large numbers to America. Throughout the 17th century most of the immigrants were English; but, beginning in the second decade of the 18th century, a wave of Germans, principally from the Rhineland Palatinate, arrived in America: by 1770 between 225,000 and 250,000 Germans had immigrated to America, more than 70 percent of them settling in the middle colonies, where generous land policies and religious toleration made life more comfortable for them. The Scotch-Irish and Irish immigration, which began on a large scale after 1713 and continued past the American Revolution, was more evenly distributed. By 1750 both Scotch-Irish and Irish could be found in the western portions of nearly every colony. In almost all the regions in which Europeans sought greater economic opportunity, however, that same quest for independence and self-sufficiency led to tragic conflict with Indians over the control of land. And in nearly every instance the outcome was similar: the Europeans, failing to respect Indian claims either to land or to cultural autonomy, pushed the Indians of North America farther and farther into the periphery.
Provincial America came to be less dependent upon subsistence agriculture and more on the cultivation and manufacture of products for the world market. Land, which initially served only individual needs, came to be the fundamental source of economic enterprise. The independent yeoman farmer continued to exist, particularly in New England and the middle colonies, but most settled land in North America by 1750 was devoted to the cultivation of a cash crop. New England turned its land over to the raising of meat products for export. The middle colonies were the principal producers of grains. By 1700 Philadelphia exported more than 350,000 bushels of wheat and more than 18,000 tons of flour annually. The Southern colonies were, of course, even more closely tied to the cash crop system. South Carolina, aided by British incentives, turned to the production of rice and indigo. North Carolina, although less oriented toward the market economy than South Carolina, was nevertheless one of the principal suppliers of naval stores. Virginia and Maryland steadily increased their economic dependence on tobacco and on the London merchants who purchased that tobacco, and for the most part they ignored those who recommended that they diversify their economies by turning part of their land over to the cultivation of wheat. Their near-total dependence upon the world tobacco price would ultimately prove disastrous, but for most of the 18th century Virginia and Maryland soil remained productive enough to make a single-crop system reasonably profitable.
As America evolved from subsistence to commercial agriculture, an influential commercial class increased its power in nearly every colony. Boston was the centre of the merchant elite of New England, who not only dominated economic life but also wielded social and political power as well. Merchants such as James De Lancey and Philip Livingston in New York and Joseph Galloway, Robert Morris, and Thomas Wharton in Philadelphia exerted an influence far beyond the confines of their occupations. In Charleston the Pinckney, Rutledge, and Lowndes families controlled much of the trade that passed through that port. Even in Virginia, where a strong merchant class was nonexistent, those people with the most economic and political power were those commercial farmers who best combined the occupations of merchant and farmer. And it is clear that the commercial importance of the colonies was increasing. During the years 1700–10, approximately £265,000 sterling was exported annually to Great Britain from the colonies, with roughly the same amount being imported by the Americans from Great Britain. By the decade 1760–70, that figure had risen to more than £1,000,000 sterling of goods exported annually to Great Britain and £1,760,000 annually imported from Great Britain.
Land, labour, and independence
Although Frederick Jackson Turner’s 1893 “frontier thesis”—that American democracy was the result of an abundance of free land—has long been seriously challenged and modified, it is clear that the plentifulness of virgin acres and the lack of workers to till them did cause a loosening of the constraints of authority in the colonial and early national periods. Once it became clear that the easiest path to success for Britain’s New World “plantations” lay in raising export crops, there was a constant demand for agricultural labour, which in turn spurred practices that—with the notable exception of slavery—compromised a strictly hierarchical social order.
In all the colonies, whether governed directly by the king, by proprietors, or by chartered corporations, it was essential to attract settlers, and what governors had most plentifully to offer was land. Sometimes large grants were made to entire religious communities numbering in the hundreds or more. Sometimes tracts were allotted to wealthy men on the “head rights” (literally “per capita”) system of so many acres for each family member they brought over. Few Englishmen or Europeans had the means to buy farms outright, so the simple sale of homesteads by large-scale grantees was less common than renting. But there was another well-traveled road to individual proprietorship that also provided a workforce: the system of contract labour known as indentured service. Under it, an impecunious new arrival would sign on with a landowner for a period of service—commonly seven years—binding him to work in return for subsistence and sometimes for the repayment of his passage money to the ship captain who had taken him across the Atlantic (such immigrants were called “redemptioners”). At the end of this term, the indentured servant would in many cases be rewarded by the colony itself with “freedom dues,” a title to 50 or more acres of land in a yet-unsettled area. This somewhat biblically inspired precapitalist system of transfer was not unlike apprenticeship, the economic and social tool that added to the supply of skilled labour. The apprentice system called for a prepubescent boy to be “bound out” to a craftsman who would take him into his own home and there teach him his art while serving as a surrogate parent. (Girls were perennially “apprenticed” to their mothers as homemakers.) Both indentured servants and apprentices were subject to the discipline of the master, and their lot varied with his generosity or hard-fistedness. There must have been plenty of the latter type of master, as running away was common. The first Africans taken to Virginia, or at least some of them, appear to have worked as indentured servants. Not until the case of John Punch in the 1640s did it become legally established that black “servants” were to remain such for life. Having escaped, been caught, and brought to trial, Punch, an indentured servant of African descent, and two other indentured servants of European descent received very different sentences, with Punch’s punishment being servitude for the “rest of his natural life” while that for the other two was merely an extension of their service.
The harshness of New England’s climate and topography meant that for most of its people the road to economic independence lay in trade, seafaring, fishing, or craftsmanship. But the craving for an individually owned subsistence farm grew stronger as the first generations of religious settlers who had “planted” by congregation died off. In the process the communal holding of land by townships—with small allotted family garden plots and common grazing and orchard lands, much in the style of medieval communities—yielded gradually to the more conventional privately owned fenced farm. The invitation that available land offered—individual control of one’s life—was irresistible. Property in land also conferred civic privileges, so an unusually large number of male colonists were qualified for suffrage by the Revolution’s eve, even though not all of them exercised the vote freely or without traditional deference to the elite.
Slavery was the backbone of large-scale cultivation of such crops as tobacco and hence took strongest root in the Southern colonies. But thousands of white freeholders of small acreages also lived in those colonies; moreover, slavery on a small scale (mainly in domestic service and unskilled labour) was implanted in the North. The line between a free and a slaveholding America had not yet been sharply drawn.
One truly destabilizing system of acquiring land was simply “squatting.” On the western fringes of settlement, it was not possible for colonial administrators to use police powers to expel those who helped themselves to acres technically owned by proprietors in the seaboard counties. Far from seeing themselves as outlaws, the squatters believed that they were doing civilization’s work in putting new land into production, and they saw themselves as the moral superiors of eastern “owners” for whom land was a mere speculative commodity that they did not, with great danger and hardship, cultivate themselves. Squatting became a regular feature of westward expansion throughout early U.S. history. |
Byzantine music (Greek: Βυζαντινή μουσική) is the music of the Byzantine Empire. Originally it consisted of songs and hymns composed to Greek texts used for courtly ceremonials, during festivals, or as paraliturgical and liturgical music. The ecclesiastical forms of Byzantine music are the best known forms today, because various Orthodox traditions still identify with the heritage of Byzantine music when their cantors sing monodic chant from the traditional chant books, such as the Sticherarion (which in fact consisted of five books) and the Irmologion.
Byzantine music did not disappear after the fall of Constantinople. Its traditions continued under the Patriarch of Constantinople, who after the Ottoman conquest in 1453 was granted administrative responsibilities over all Eastern Orthodox Christians in the Ottoman Empire. During the decline of the Ottoman Empire in the 19th century, burgeoning splinter nations in the Balkans declared autonomy or autocephaly from the Patriarchate of Constantinople. The new self-declared patriarchates were independent nations defined by their religion.
In this context, Christian religious chant as practiced in the Ottoman Empire, in Bulgaria, Serbia, Greece and other nations, was based on the historical roots of an art tracing back to the Byzantine Empire, while the music of the Patriarchate created during the Ottoman period was often regarded as "post-Byzantine". This explains why Byzantine music refers to several Orthodox Christian chant traditions of the Mediterranean and of the Caucasus practiced in recent history and even today, and why this article cannot be limited to the music culture of the Byzantine past.
In 2019 UNESCO added Byzantine chant to its list of Intangible Cultural Heritage, noting that "as a living art that has existed for more than 2000 years, the Byzantine chant is a significant cultural tradition and comprehensive music system forming part of the common musical traditions that developed in the Byzantine Empire".
See also: List of Byzantine composers
The tradition of eastern liturgical chant, encompassing the Greek-speaking world, developed from before the establishment of the new Roman capital, Constantinople, in 330 until the city's fall in 1453. Byzantine music was influenced by Hellenistic music traditions and classical Greek music, as well as by the religious music traditions of Syriac and Hebrew cultures. The Byzantine system of octoechos, in which melodies were classified into eight modes, is specifically thought to have been exported from Syria, where it was known in the 6th century, before its legendary creation by John of Damascus. It was imitated by musicians of the 7th century to create Arab music as a synthesis of Byzantine and Persian music, and these exchanges continued through the Ottoman Empire and survive in Istanbul today.
The term Byzantine music is sometimes associated with the medieval sacred chant of Christian Churches following the Constantinopolitan Rite. "Byzantine music" is also sometimes identified with "Eastern Christian liturgical chant", an identification owed to certain monastic reforms, such as the Octoechos reform of the Quinisext Council (692) and the later reforms of the Stoudios Monastery under its abbots Sabas and Theodore. The triodion created during Theodore's reform was soon translated into Slavonic, which also required the adaptation of melodic models to the prosody of the language. Later, after the Patriarchate and Court had returned to Constantinople in 1261, the former cathedral rite was not continued but was replaced by a mixed rite, which used the Byzantine Round notation to integrate the earlier notations of the older chant books (Papadike). This notation had developed within the book sticherarion created by the Stoudios Monastery, but it was used for the books of the cathedral rites written in the period after the fourth crusade, when the cathedral rite had already been abandoned at Constantinople. It has also been suggested that an organ was placed in the narthex of the Hagia Sophia for use in secular processions of the Emperor's entourage.
The chant manual known as the "Hagiopolites" mentions 16 church tones (echoi), while its author introduces a tonal system of 10 echoi. Nevertheless, both schools have in common a set of 4 octaves (protos, devteros, tritos, and tetartos), each of which had a kyrios echos (authentic mode) with the finalis on degree V of the mode and a plagios echos (plagal mode) with the final note on degree I. According to Latin theory, the resulting eight tones (octoechos) were identified with the seven modes (octave species) and the tropes (tropoi, meaning the transposition of these modes). Names of the tropes such as "Dorian" were also used in Greek chant manuals, but the names Lydian and Phrygian for the octaves of devteros and tritos were sometimes exchanged. Ancient Greek harmonikai was one of the mathemata ("exercises") of the Pythagorean education programme as received in the Hellenistic period. Today, chanters of the Christian Orthodox churches identify with the heritage of Byzantine music, whose earliest composers have been remembered by name since the 5th century. Compositions were attributed to them, but they must be reconstructed from notated sources that date from centuries later. The melodic neume notation of Byzantine music developed only from the 10th century, apart from an earlier ekphonetic notation (interpunction signs used in lectionaries), but modal signatures for the eight echoi can already be found in fragments (papyri) of monastic hymn books (tropologia) dating back to the 6th century.
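The eight-mode structure just described can also be laid out schematically. The following minimal sketch in Python is purely illustrative and is not drawn from any chant manual; it simply enumerates the eight echoi implied by the four octave families, using the finalis degrees given above (degree V for the kyrios, degree I for the plagios).

```python
# Illustrative sketch only: the octoechos as a simple data structure.
# Four octave families, each with an authentic mode (kyrios echos,
# finalis on degree V) and a plagal mode (plagios echos, finalis on degree I).

OCTOECHOS = {
    family: {
        "kyrios": {"type": "authentic", "finalis_degree": 5},
        "plagios": {"type": "plagal", "finalis_degree": 1},
    }
    for family in ("protos", "devteros", "tritos", "tetartos")
}

def list_echoi():
    """Enumerate the eight echoi implied by the four families."""
    for family, modes in OCTOECHOS.items():
        for kind, info in modes.items():
            yield f"{kind} {family}", info["finalis_degree"]

if __name__ == "__main__":
    for name, degree in list_echoi():
        print(f"{name}: finalis on degree {degree}")
```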
Amid the rise of Christian civilization within Hellenism, many concepts of knowledge and education survived during the imperial age, when Christianity became the official religion. The Pythagorean sect and music as part of the four "cyclical exercises" (ἐγκύκλια μαθήματα), which preceded the Latin quadrivium and today's mathematics-based science, were established mainly among Greeks in southern Italy (at Taranto and Crotone). Greek anchorites of the early Middle Ages still followed this education. The Calabrian Cassiodorus founded Vivarium, where he translated Greek texts (science, theology and the Bible), and John of Damascus, who learnt Greek from the Calabrian monk Kosmas, a slave in the household of his privileged father at Damascus, mentioned mathematics as part of speculative philosophy.
Διαιρεῖται δὲ ἡ φιλοσοφία εἰς θεωρητικὸν καὶ πρακτικόν, τὸ θεωρητικὸν εἰς θεολογικόν, φυσικόν, μαθηματικόν, τὸ δὲ πρακτικὸν εἰς ἠθικόν, οἰκονομικόν, πολιτικόν.
According to him, philosophy was divided into theory (theology, physics, mathematics) and practice (ethics, economy, politics), and the Pythagorean heritage was part of the former, while only the ethical effects of music were relevant in practice. The mathematical science of harmonics was usually not mixed with the concrete topics of a chant manual.
Nevertheless, Byzantine music is modal and entirely dependent on the Ancient Greek concept of harmonics. Its tonal system is based on a synthesis with ancient Greek models, but no surviving sources explain how this synthesis was made. Carolingian cantors could mix the science of harmonics with a discussion of church tones, named after the ethnic names of the octave species and their transposition tropes, because they invented their own octoechos on the basis of the Byzantine one. But they made no use of earlier Pythagorean concepts that had been fundamental for Byzantine music, including:
| Greek Reception | Latin Reception |
| --- | --- |
| the division of the tetrachord by three different intervals | the division by two different intervals (twice a tone and one half tone) |
| the temporary change of the genus (μεταβολὴ κατὰ γένος) | the official exclusion of the enharmonic and chromatic genus, although its use was rarely commented in a polemic way |
| the temporary change of the echos (μεταβολὴ κατὰ ἤχον) | a definitive classification according to one church tone |
| the temporary transposition (μεταβολὴ κατὰ τόνον) | absonia (Musica and Scolica enchiriadis, Berno of Reichenau, Frutolf of Michelsberg), although it was known since Boethius' wing diagramme |
| the temporary change of the tone system (μεταβολὴ κατὰ σύστημα) | no alternative tone system, except the explanation of absonia |
| the use of at least three tone systems (triphonia, tetraphonia, heptaphonia) | the use of the systema teleion (heptaphonia); relevance of the Dasia system (tetraphonia) outside polyphony and of the triphonia mentioned in the Cassiodorus quotation (Aurelian) unclear |
| the microtonal attraction of mobile degrees (κινούμενοι) by fixed degrees (ἑστώτες) of the mode (echos) and its melos, not of the tone system | the use of dieses (attracted are E, a, and b natural within a half tone), since Boethius until Guido of Arezzo's concept of mi |
It is not evident from the sources when exactly the position of the minor or half tone moved between the devteros and the tritos. It seems that the fixed degrees (hestotes) became part of a new concept of the echos as a melodic mode (not simply an octave species) after the echoi had come to be called by the ethnic names of the tropes.
The 9th-century Persian geographer Ibn Khurradadhbih (d. 911), in his lexicographical discussion of instruments, cited the lyra (lūrā) as the typical instrument of the Byzantines, along with the urghun (organ), the shilyani (probably a type of harp or lyre) and the salandj (probably a bagpipe).
The first of these, the early bowed stringed instrument known as the Byzantine lyra, would come to be called the lira da braccio in Venice, where it is considered by many to have been the predecessor of the contemporary violin, which later flourished there. The bowed "lyra" is still played in former Byzantine regions, where it is known as the Politiki lyra (lit. "lyra of the City", i.e. Constantinople) in Greece, the Calabrian lira in Southern Italy, and the Lijerica in Dalmatia.
The second instrument, the Hydraulis, originated in the Hellenistic world and was used in the Hippodrome in Constantinople during races. A pipe organ with "great leaden pipes" was sent by the emperor Constantine V to Pepin the Short, King of the Franks, in 757. Pepin's son Charlemagne requested a similar organ for his chapel in Aachen in 812, beginning its establishment in Western church music. Despite this, the Byzantines never used pipe organs and kept the flute-sounding Hydraulis until the Fourth Crusade.
The final Byzantine instrument, the aulos, was a double-reeded woodwind like the modern oboe or Armenian duduk. Other forms include the plagiaulos (πλαγίαυλος, from πλάγιος, plagios "sideways"), which resembled the flute, and the askaulos (ἀσκαυλός from ἀσκός askos "wine-skin"), a bagpipe. These bagpipes, also known as Dankiyo (from ancient Greek: To angeion (Τὸ ἀγγεῖον) "the container"), had been played even in Roman times. Dio Chrysostom wrote in the 1st century of a contemporary sovereign (possibly Nero) who could play a pipe (tibia, Roman reedpipes similar to Greek aulos) with his mouth as well as by tucking a bladder beneath his armpit. The bagpipes continued to be played throughout the empire's former realms down to the present. (See Balkan Gaida, Greek Tsampouna, Pontic Tulum, Cretan Askomandoura, Armenian Parkapzuk, Zurna and Romanian Cimpoi.)
Other instruments commonly used in Byzantine music include the Kanonaki, Oud, Laouto, Santouri, Toubeleki, Tambouras, Defi Tambourine, Çifteli (which was known as Tamburica in Byzantine times), Lyre, Kithara, Psaltery, Saz, Floghera, Pithkiavli, Kavali, Seistron, Epigonion (the ancestor of the Santouri), Varviton (the ancestor of the Oud and a variation of the Kithara), Crotala, Bowed Tambouras (similar to the Byzantine Lyra), Šargija, Monochord, Sambuca, Rhoptron, Koudounia, perhaps the Lavta, and other instruments that were used before the Fourth Crusade but are no longer played today and about which little is now known.
In 2021, archaeologists discovered a six-holed flute dating to the 4th–5th centuries AD at Zerzevan Castle.
Secular music existed and accompanied every aspect of life in the empire, including dramatic productions, pantomime, ballets, banquets, political and pagan festivals, Olympic games, and all ceremonies of the imperial court. It was, however, regarded with contempt, and was frequently denounced as profane and lascivious by some Church Fathers.
Another genre that lies between liturgical chant and court ceremonial comprises the so-called polychronia (πολυχρονία) and acclamations (ἀκτολογία). The acclamations were sung to announce the entrance of the Emperor during representative receptions at the court, into the hippodrome or into the cathedral. They can be distinguished from the polychronia, ritual prayers or ektenies for the present political rulers, and are usually answered by a choir with formulas such as "Lord protect" (κύριε σῶσον) or "Lord have mercy on us/them" (κύριε ἐλέησον). The documented polychronia in books of the cathedral rite allow a geographical and chronological classification of the manuscripts, and they are still used during the ektenies of the divine liturgies of national Orthodox ceremonies today. The hippodrome was used for a traditional feast called Lupercalia (15 February), and on this occasion the following acclamation was celebrated:
| Speaker | Acclamation (English) | Speaker (Greek) | Acclamation (Greek) |
| --- | --- | --- | --- |
| Claqueurs | Lord, protect the Master of the Romans. | Οἱ κράκται· | Κύριε, σῶσον τοὺς δεσπότας τῶν Ῥωμαίων. |
| The people | Lord, protect (X3). | ὁ λαός ἐκ γ'· | Κύριε, σῶσον. |
| Claqueurs | Lord, protect to whom they gave the crown. | Οἱ κράκται· | Κύριε, σῶσον τοὺς ἐκ σοῦ ἐστεμμένους. |
| The people | Lord, protect (X3). | ὁ λαός ἐκ γ'· | Κύριε, σῶσον. |
| Claqueurs | Lord, protect the Orthodox power. | Οἱ κράκται· | Κύριε, σῶσον ὀρθόδοξον κράτος· |
| The people | Lord, protect (X3). | ὁ λαός ἐκ γ'· | Κύριε, σῶσον. |
| Claqueurs | Lord, protect the renewal of the annual cycles. | Οἱ κράκται· | Κύριε, σῶσον τὴν ἀνακαίνησιν τῶν αἰτησίων. |
| The people | Lord, protect (X3). | ὁ λαός ἐκ γ'· | Κύριε, σῶσον. |
| Claqueurs | Lord, protect the wealth of the subjects. | Οἱ κράκται· | Κύριε, σῶσον τὸν πλοῦτον τῶν ὑπηκόων· |
| The people | Lord, protect (X3). | ὁ λαός ἐκ γ'· | Κύριε, σῶσον. |
| Claqueurs | May the Creator and Master of all things make long your years with the Augustae and the Porphyrogeniti. | Οἱ κράκται· | Ἀλλ᾽ ὁ πάντων Ποιητὴς καὶ Δεσπότης τοὺς χρόνους ὑμῶν πληθύνει σὺν ταῖς αὐγούσταις καὶ τοῖς πορφυρογεννήτοις. |
| The people | Lord, protect (X3). | ὁ λαός ἐκ γ'· | Κύριε, σῶσον. |
| Claqueurs | Listen, God, to your people. | Οἱ κράκται· | Εἰσακούσει ὁ Θεὸς τοῦ λαοῦ ἡμῶν· |
| The people | Lord, protect (X3). | ὁ λαός ἐκ γ'· | Κύριε, σῶσον. |
The main source for court ceremonies is an incomplete compilation in a 10th-century manuscript that organised parts of a treatise, Περὶ τῆς Βασιλείου Τάξεως ("On imperial ceremonies"), ascribed to Emperor Constantine VII but in fact compiled by different authors who contributed additional ceremonies of their own periods. In its incomplete form, chapters 1–37 of book I describe processions and ceremonies on religious festivals (many lesser ones, but especially great feasts such as the Elevation of the Cross, Christmas, Theophany, Palm Sunday, Good Friday, Easter and Ascension Day, and feasts of saints including St Demetrius, St Basil, etc., often extended over many days), while chapters 38–83 describe secular ceremonies or rites of passage such as coronations, weddings, births, funerals, or the celebration of war triumphs. For the celebration of Theophany the protocol begins to mention several stichera and their echoi (ch. 3) and who had to sing them:
Δοχὴ πρώτη, τῶν Βενέτων, φωνὴ ἢχ. πλαγ. δ`. « Σήμερον ὁ συντρίψας ἐν ὕδασι τὰς κεφαλὰς τῶν δρακόντων τὴν κεφαλὴν ὑποκλίνει τῷ προδρόμῳ φιλανθρώπως. » Δοχἠ β᾽, τῶν Πρασίνων, φωνὴ πλαγ. δ'· « Χριστὸς ἁγνίζει λουτρῷ ἁγίῳ τὴν ἐξ ἐθνῶν αὐτοῦ Ἐκκλησίαν. » Δοχὴ γ᾽, τῶν Βενέτων, φωνἠ ἤχ. πλαγ. α'· « Πυρὶ θεότητος ἐν Ἰορδάνῃ φλόγα σβεννύει τῆς ἁμαρτίας. »

(First reception, of the Blues, in echos plagal 4: "Today he who crushed the heads of the dragons in the waters bows his head to the Forerunner in his love for mankind." Second reception, of the Greens, in plagal 4: "Christ purifies by the holy washing his Church from the nations." Third reception, of the Blues, in echos plagal 1: "With the fire of divinity he quenches in the Jordan the flame of sin.")
These protocols gave rules for imperial progresses to and from certain churches at Constantinople and the imperial palace, with fixed stations and rules for ritual actions and acclamations from specified participants (the texts of acclamations and of processional troparia or kontakia, but also heirmoi, are mentioned), among them ministers, senate members, and the leaders of the "Blues" (Venetoi) and the "Greens" (Prasinoi), the chariot teams during the hippodrome's horse races, who had an important role during court ceremonies. The following chapters (84–95) are taken from a 6th-century manual by Peter the Patrician. They rather describe administrative ceremonies such as the appointment of certain functionaries (ch. 84, 85), investitures of certain offices (86), the reception of ambassadors and the proclamation of the Western Emperor (87, 88), the reception of Persian ambassadors (89, 90), the Anagorevseis of certain Emperors (91–96), and the appointment of the senate's proedros (97). The "palace order" prescribed not only the manner of movement (symbolic or real), whether on foot, mounted, or by boat, but also the costumes of the celebrants and who had to perform certain acclamations. The emperor often plays the role of Christ, and the imperial palace is chosen for religious rituals, so that the ceremonial book brings the sacred and the profane together. Book II seems to be less normative and was obviously not compiled from older sources like book I, which often mentioned outdated imperial offices and ceremonies; rather, it describes particular ceremonies as they had been celebrated during particular imperial receptions of the Macedonian renaissance.
Two concepts must be understood in order to appreciate fully the function of music in Byzantine worship. Both were related to a new form of urban monasticism, which even shaped the representative cathedral rites of the imperial age, rites that had to baptise many catechumens.
The first, which retained currency in Greek theological and mystical speculation until the dissolution of the empire, was the belief in the angelic transmission of sacred chant: the assumption that the early Church united men in the prayer of the angelic choirs. It was partly based on the Hebrew foundation of Christian worship, but particularly on the reception of St. Basil of Caesarea's divine liturgy. John Chrysostom, Archbishop of Constantinople from 397, abridged the long formula of Basil's divine liturgy for the local cathedral rite.
The notion of angelic chant is certainly older than the Apocalypse account (Revelation 4:8–11), for the musical function of angels as conceived in the Old Testament is brought out clearly by Isaiah (6:1–4) and Ezekiel (3:12). Most significant is the fact, outlined in Exodus 25, that the pattern for the earthly worship of Israel was derived from heaven. The allusion is perpetuated in the writings of the early Fathers, such as Clement of Rome, Justin Martyr, Ignatius of Antioch, Athenagoras of Athens, John Chrysostom and Pseudo-Dionysius the Areopagite. It receives acknowledgement later in the liturgical treatises of Nicolas Kavasilas and Symeon of Thessaloniki.
The second, less permanent, concept was that of koinonia or "communion". This was less permanent because, after the fourth century, when it was analyzed and integrated into a theological system, the bond and "oneness" that united the clergy and the faithful in liturgical worship was less potent. It is, however, one of the key ideas for understanding a number of realities for which we now have different names. With regard to musical performance, this concept of koinonia may be applied to the primitive use of the word choros. It referred, not to a separate group within the congregation entrusted with musical responsibilities, but to the congregation as a whole. St. Ignatius wrote to the Church in Ephesus in the following way:
You must every man of you join in a choir so that being harmonious and in concord and taking the keynote of God in unison, you may sing with one voice through Jesus Christ to the Father, so that He may hear you and through your good deeds recognize that you are parts of His Son.
A marked feature of liturgical ceremony was the active part taken by the people in its performance, particularly in the recitation or chanting of hymns, responses and psalms. The terms choros, koinonia and ekklesia were used synonymously in the early Byzantine Church. In Psalms 149 and 150, the Septuagint translated the Hebrew word machol (dance) by the Greek word choros (χορός). As a result, the early Church borrowed this word from classical antiquity as a designation for the congregation, at worship and in song, in heaven and on earth alike.
Concerning the practice of psalm recitation, recitation by a congregation of educated chanters is already attested by the soloistic recitation of abridged psalms by the end of the 4th century. Later it was called prokeimenon. Hence, there was an early practice of simple psalmody, which was used for the recitation of canticles and the psalter; Byzantine psalters usually have the 15 canticles in an appendix, but the simple psalmody itself was not notated before the 13th century, in dialogue or papadikai treatises preceding the book sticheraria. Later books, like the akolouthiai and some psaltika, also contain the elaborated psalmody, in which a protopsaltes recited just one or two psalm verses. Between the recited psalms and canticles, troparia were recited according to the same more or less elaborated psalmody. This context relates to antiphonal chant genres, including the antiphona (a kind of introit), the trisagion and its substitutes, the prokeimenon, the allelouiarion, the later cherubikon and its substitutes, and the koinonikon cycles as they were created during the 9th century. In most cases they were simply troparia, and their repetitions or segments were given by the antiphonon; whether or not it was sung, its three sections of psalmodic recitation were separated by the troparion.
The fashion in all cathedral rites of the Mediterranean was a new emphasis on the psalter. In older ceremonies before Christianity became the religion of empires, the recitation of the biblical odes (mainly taken from the Old Testament) was much more important. They did not disappear in certain cathedral rites such as the Milanese and the Constantinopolitan rite.
Before long, however, a clericalizing tendency began to manifest itself in linguistic usage, particularly after the Council of Laodicea, whose fifteenth canon permitted only the canonical psaltai ("chanters") to sing at the services. The word choros came to refer to the special priestly function in the liturgy (just as, architecturally speaking, the choir became a reserved area near the sanctuary), and choros eventually became the equivalent of the word kleros (the pulpits of two or even five choirs).
The nine canticles or odes according to the psalter were the two Songs of Moses (Exodus 15 and Deuteronomy 32), the prayers of Hannah, Habakkuk, Isaiah and Jonah, the prayer and the song of the Three Holy Children (Daniel 3), and the Magnificat together with the Benedictus (Luke 1); in Constantinople they were combined in pairs against this canonical order.
The common term for a short hymn of one stanza, or one of a series of stanzas, is troparion. As a refrain interpolated between psalm verses it had the same function as the antiphon in Western plainchant. The simplest troparion was probably "allelouia", and, like the trisagion, the cherubikon or the koinonika, many troparia became chant genres of their own.
A famous example, whose existence is attested as early as the 4th century, is the Easter Vespers hymn, Phos Hilaron ("O Resplendent Light"). Perhaps the earliest set of troparia of known authorship are those of the monk Auxentios (first half of the 5th century), attested in his biography but not preserved in any later Byzantine order of service. Another, O Monogenes Yios ("Only Begotten Son"), ascribed to the emperor Justinian I (527–565), followed the doxology of the second antiphonon at the beginning of the Divine Liturgy.
The development of large-scale hymnographic forms begins in the fifth century with the rise of the kontakion, a long and elaborate metrical sermon, reputedly of Syriac origin, which finds its acme in the work of St. Romanos the Melodist (6th century). This dramatic homily, which could treat various subjects, theological and hagiographical ones as well as imperial propaganda, comprises some 20 to 30 stanzas (oikoi, "houses") and was sung in a rather simple style with emphasis on the understanding of the recited texts. The earliest notated versions, in Slavic kondakars (12th century) and Greek kontakaria-psaltika (13th century), however, are in a more elaborate style (also rubrified idiomela) and had probably been sung since the ninth century, when kontakia were reduced to the prooimion (introductory verse) and first oikos (stanza). Romanos' own recitation of all the numerous oikoi must have been much simpler, but the most interesting question about the genre concerns the different functions that kontakia once had. Romanos' original melodies have not been transmitted by notated sources dating back to the 6th century; the earliest notated source is the Tipografsky Ustav, written about 1100. Its gestic notation was different from the Middle Byzantine notation used in Italian and Athonite kontakaria of the 13th century, where the gestic signs (cheironomiai) became integrated as "great signs". During the period of psaltic art (14th and 15th centuries), the interest in kalophonic elaboration was focussed on one particular kontakion which was still celebrated: the Akathist hymn. An exception was John Kladas, who also contributed kalophonic settings of other kontakia of the repertoire.
Some of them had a clear liturgical assignation, others not, so that they can only be understood against the background of the later book of ceremonies. Some of Romanos' creations can even be regarded as political propaganda in connection with the new and very fast reconstruction of the famous Hagia Sophia by Isidore of Miletus and Anthemius of Tralles. A quarter of Constantinople had been burnt down during a civil war. Justinian had ordered a massacre at the hippodrome, because his imperial antagonists, who were affiliated with the former dynasty, had been organised as a chariot team. Thus, he had room for the creation of a huge park with a new cathedral in it, the Hagia Sophia, larger than any church built before. He needed a kind of mass propaganda to justify the imperial violence against the public. In the kontakion "On earthquakes and conflagration" (H. 54), Romanos interpreted the Nika riot as a divine punishment, which in 532 followed earlier ones, including earthquakes (526–529) and a famine (530):
| English translation | Greek text |
| --- | --- |
| The city was buried beneath these horrors and cried in great sorrow. | Ὑπὸ μὲν τούτων τῶν δεινῶν κατείχετο ἡ πόλις καὶ θρῆνον εἶχε μέγα· |
| Those who feared God stretched their hands out to him, | Θεὸν οἱ δεδιότες χεῖρας ἐξέτεινον αὐτῷ |
| begging for compassion and an end to the terror. | ἐλεημοσύνην ἐξαιτοῦντες παρ᾽ αὐτοῦ καὶ τῶν κακῶν κατάπαυσιν· |
| Reasonably, the emperor—and his empress—were in these ranks, | σὺν τούτοις δὲ εἰκότως ἐπηύχετο καὶ ὁ βασιλεύων |
| their eyes lifted in hope toward the Creator: | ἀναβλέψας πρὸς τὸν πλάστην —σὺν τούτῳ δὲ σύνευνος ἡ τούτου— |
| "Grant me victory", he said, "just as you made David | Δός μοι, βοῶν, σωτήρ, ὡς καὶ τῷ Δαυίδ σου |
| victorious over Goliath. You are my hope. | τοῦ νικῆσαι Γολιάθ· σοὶ γὰρ ἐλπίζω· |
| Rescue, in your mercy, your loyal people | σῶσον τὸν πιστὸν λαόν σου ὡς ἐλεήμων, |
| and grant them eternal life." | οἶσπερ καὶ δώσῃς ζωὴν τὴν αἰώνιον. (H. 54.18) |
According to Johannes Koder, the kontakion was celebrated for the first time during the Lenten period of 537, about ten months before the official inauguration of the newly built Hagia Sophia on 27 December.
Main article: Cherubikon
During the second half of the sixth century there was a change in Byzantine sacred architecture, because the altar used for the preparation of the eucharist was removed from the bema and placed in a separate room called the "prothesis" (πρόθεσις). The separation of the prothesis, where the bread was consecrated during a separate service called the proskomide, required a procession of the gifts at the beginning of the second, eucharistic part of the divine liturgy. The troparion "Οἱ τὰ χερουβεὶμ", which was sung during this procession, was often ascribed to Emperor Justin II, but the changes in sacred architecture have definitely been traced back to his time by archaeologists. In the Hagia Sophia, which had been constructed earlier, the procession obviously took place within the church. It seems that the cherubikon was a prototype of the Western chant genre of the offertory.
With this change came also the dramaturgy of the three doors in a choir screen before the bema (sanctuary). They were closed and opened during the ceremony. Outside Constantinople these choir or icon screens of marble were later replaced by iconostaseis. Antonin, a Russian monk and pilgrim of Novgorod, described the procession of choirs during Orthros and the divine liturgy, when he visited Constantinople in December 1200:
When they sing Lauds at Hagia Sophia, they sing first in the narthex before the royal doors; then they enter to sing in the middle of the church; then the gates of Paradise are opened and they sing a third time before the altar. On Sundays and feastdays the Patriarch assists at Lauds and at the Liturgy; at this time he blesses the singers from gallery, and ceasing to sing, they proclaim the polychronia; then they begin to sing again as harmoniously and as sweetly as the angels, and they sing in this fashion until the Liturgy. After Lauds they put off their vestments and go out to receive the blessing of the Patriarch; then the preliminary lessons are read in the ambo; when these are over the Liturgy begins, and at the end of the service the chief priest recites the so-called prayer of the ambo within the sanctuary while the second priest recites in the church, beyond the ambo; when they have finished the prayer, both bless the people. Vespers are said in the same fashion, beginning at an early hour.
By the end of the seventh century, with the reform of 692, the kontakion, Romanos' genre, was overshadowed by a certain monastic type of homiletic hymn, the canon, and the prominent role it played within the cathedral rite of the Patriarchate of Jerusalem. Essentially, the canon, as it has been known since the 8th century, is a hymnodic complex composed of nine odes that were originally related, at least in content, to the nine Biblical canticles, to which they referred by means of corresponding poetic allusion or textual quotation (see the section about the biblical odes). Out of the custom of canticle recitation, monastic reformers at Constantinople, Jerusalem and Mount Sinai developed a new homiletic genre whose verses, in the complex ode metre, were composed over a melodic model: the heirmos.
During the 7th century, kanons at the Patriarchate of Jerusalem still consisted of two or three odes throughout the year cycle, and they often combined different echoi. The form common today, of nine or eight odes, was introduced by composers within the school of Andrew of Crete at Mar Saba. The nine odes of the kanon were dissimilar in their metre. Consequently, an entire heirmos comprises nine independent melodies (eight, because the second ode was often omitted outside the Lenten period), which are united musically by the same echos and its melos, and sometimes even textually by references to the general theme of the liturgical occasion, especially in acrosticha composed over a given heirmos but dedicated to a particular day of the menaion. Until the 11th century, the common book of hymns was the tropologion; it had no musical notation other than a modal signature and combined different hymn genres such as the troparion, the sticheron, and the canon.
The earliest tropologion was composed by Severus of Antioch, Paul of Edessa and Ioannes Psaltes at the Patriarchate of Antioch between 512 and 518. Their tropologion has survived only in a Syriac translation revised by Jacob of Edessa. The tropologion was continued by Sophronius, Patriarch of Jerusalem, and especially by Andrew of Crete's contemporary Germanus I, Patriarch of Constantinople, who as a gifted hymnographer not only represented a school of his own but was also very eager, from 705, to realise the purpose of this reform, although its authority was questioned by iconoclast antagonists and only established in 787. After the octoechos reform of the Quinisext Council in 692, monks at Mar Saba continued the hymn project under Andrew's instruction, especially his most gifted followers John of Damascus and Cosmas of Jerusalem. These various layers of the Hagiopolitan tropologion since the 5th century have mainly survived in a Georgian type of tropologion called the "Iadgari", whose oldest copies can be dated back to the 9th century.
Today the second ode is usually omitted (although the great kanon attributed to John of Damascus includes it), but medieval heirmologia rather testify to the custom that the extremely strict spirit of Moses' last prayer was recited especially during Lent, when the number of odes was limited to three (the triodion); Patriarch Germanus I in particular contributed many compositions of his own for the second ode. According to Alexandra Nikiforova, only two of the 64 canons composed by Germanus I are present in the current print editions, but manuscripts have transmitted his hymnographic heritage.
During the 9th-century reforms of the Stoudios Monastery, the reformers favoured Hagiopolitan composers and customs in their new notated chant books, the heirmologion and the sticherarion, but they also added substantial parts to the tropologion and re-organised the cycle of movable and immovable feasts (especially Lent, the triodion, and its scriptural lessons). The trend is attested by a 9th-century tropologion of Saint Catherine's Monastery which is dominated by contributions from Jerusalem. Festal stichera, accompanying both the fixed psalms at the beginning and end of Hesperinos and the psalmody of the Orthros (the Ainoi) in the Morning Office, exist for all special days of the year, the Sundays and weekdays of Lent, and for the recurrent cycle of eight weeks in the order of the modes beginning with Easter. Their melodies were originally preserved in the tropologion. During the 10th century two new notated chant books were created at the Stoudios Monastery, the sticherarion and the heirmologion, which were intended to replace the tropologion.
These books were not only provided with musical notation; compared with the former tropologia they were also considerably more elaborate and varied as collections of various local traditions. In practice this meant that only a small part of the repertory was actually chosen to be sung during the divine services. Nevertheless, the tropologion as a form remained in use until the 12th century, and many later books which combined octoechos, sticherarion and heirmologion rather derive from it (especially the usually unnotated Slavonic osmoglasnik, which was often divided into two parts called "pettoglasnik", one for the kyrioi and another for the plagioi echoi).
The old custom can be studied on the basis of the 9th-century tropologion ΜΓ 56+5 from Sinai, which was still organised according to the old tropologion, beginning with the Christmas and Epiphany cycle (not with 1 September) and without any separation of the movable cycle. The new Studite or post-Studite custom established by the reformers was that each ode consists of an initial troparion, the heirmos, followed by three, four or more troparia from the menaion which are exact metrical reproductions of the heirmos (akrosticha), thereby allowing the same music to fit all troparia equally well. The combination of Constantinopolitan and Palestinian customs must also be understood against the background of political history.
The first generation around Theodore the Studite and Joseph the Confessor, and the second around Joseph the Hymnographer, suffered particularly from the first and second crises of iconoclasm. The community around Theodore could revive monastic life at the abandoned Stoudios Monastery, but he had to leave Constantinople frequently in order to escape political persecution. During this period, the Patriarchates of Jerusalem and Alexandria (especially Sinai) remained centres of the hymnographic reform. Concerning the Old Byzantine notation, Constantinople and the area between Jerusalem and Sinai can be clearly distinguished. The earliest notation used for the books sticherarion and heirmologion was theta notation, but it was soon replaced, as palimpsests show, by more detailed forms ranging between Coislin (Palestine) and Chartres notation (Constantinople). Although the Studites in Constantinople did establish a new mixed rite, its customs remained different from those of the other Patriarchates, which lay outside the Empire.
On the other hand, Constantinople as well as other parts of the Empire like Italy also encouraged privileged women to found female monastic communities, and certain hegumeniai contributed to the hymnographic reform as well. The basic repertoire of the newly created cycles (the immovable menaion, the movable triodion and pentekostarion, the weekly cycle of the parakletike, and the Orthros cycle of the eleven stichera heothina with their lessons) is the result of a redaction of the tropologion which started with the generation of Theodore the Studite and ended during the Macedonian Renaissance under the emperors Leo VI (to whom the stichera heothina are traditionally ascribed) and Constantine VII (to whom the exaposteilaria anastasima are ascribed).
Another project of the Studite reform was the organisation of the New Testament (Epistle, Gospel) reading cycles, especially their hymns during the period of the triodion (between the pre-Lenten Meatfare Sunday called "Apokreo" and the Holy Week). Older lectionaries had often been completed by the addition of ekphonetic notation and of reading marks which indicated to the readers where to start (ἀρχή) and where to finish (τέλος) on a certain day. The Studites also created a typikon, a monastic one which regulated the cœnobitic life of the Stoudios Monastery and granted its autonomy in resistance against iconoclast Emperors; but they also had an ambitious liturgical programme. They imported Hagiopolitan customs (of Jerusalem) like the Great Vespers, especially for the movable cycle between Lent and All Saints (triodion and pentekostarion), including a Sunday of Orthodoxy on the first Sunday of Lent, which celebrated the triumph over iconoclasm.
Unlike in current Orthodox custom, Old Testament readings had been particularly important during Orthros and Hesperinos in Constantinople since the 5th century, while there were none during the divine liturgy. The Great Vespers according to Studite and post-Studite custom (reserved for just a few feasts like the Sunday of Orthodoxy) were quite ambitious. The evening psalm 140 (kekragarion) was based on simple psalmody, but it was followed by a florid coda of a soloist (monophonaris). A melismatic prokeimenon was sung by him from the ambo; it was followed by three antiphons (Ps 114–116) sung by the choirs, the third of which used the trisagion or the usual anti-trisagion as a refrain, and an Old Testament reading concluded the prokeimenon.
Main article: Hagiopolitan Octoechos
The earliest chant manual claims right at the beginning that John of Damascus was its author. Its first edition was based on a more or less complete version in a 14th-century manuscript, but the treatise was probably created centuries earlier, as part of the reform redaction of the tropologia by the end of the 8th century, after Irene's Council of Nikaia had confirmed in 787 the octoechos reform of 692. It fits well with the later focus on Palestinian authors in the new chant book, the heirmologion.
Concerning the octoechos, the Hagiopolitan system is characterised as a system of eight diatonic echoi with two additional phthorai (nenano and nana) which were used by John of Damascus and Cosmas, but not by Joseph the Confessor who obviously preferred the diatonic mele of plagios devteros and plagios tetartos.
It also mentions an alternative system of the Asma (the cathedral rite was called ἀκολουθία ᾀσματική) that consisted of 4 kyrioi echoi, 4 plagioi, 4 mesoi, and 4 phthorai. It seems that until the time when the Hagiopolites was written, the octoechos reform had not worked out for the cathedral rite, because singers at the court and at the Patriarchate still used a tonal system of 16 echoi, which was obviously part of the particular notation of their books: the asmatikon and the kontakarion or psaltikon.
But neither any 9th-century Constantinopolitan chant book nor an introductory treatise explaining the aforementioned system of the Asma has survived. Only a 14th-century manuscript from Kastoria testifies to the cheironomic signs used in these books, which are there transcribed into longer melodic phrases in the notation of the contemporary sticherarion, the Middle Byzantine Round notation.
The former genre and glory of Romanos' kontakion were not abandoned by the reformers; contemporary poets in a monastic context even continued to compose new liturgical kontakia (mainly for the menaion), and the genre likely preserved a modality different from the Hagiopolitan oktoechos hymnography of the sticherarion and the heirmologion.
But only a limited number of melodies or kontakion mele had survived. Some of them were rarely used to compose new kontakia; other kontakia, which became the models for the eight prosomoia called "kontakia anastasima" according to the oktoechos, were used frequently. The kontakion ὁ ὑψωθεῖς ἐν τῷ σταυρῷ for the feast of the Exaltation of the Cross (14 September) was not the one chosen for the prosomoion of the kontakion anastasimon in the same echos; that was actually the kontakion ἐπεφάνης σήμερον for Theophany (6 January). Nevertheless, it represented the second important melos of the echos tetartos, one frequently chosen to compose new kontakia, either for the prooimion (introduction) or for the oikoi (the stanzas of the kontakion, called "houses"). Usually these models were rubrified not as "avtomela" but as idiomela, which means that the modal structure of a kontakion was more complex, similar to a sticheron idiomelon changing through different echoi.
This new monastic type of kontakarion can be found in the collection of Saint Catherine's Monastery on the Sinai peninsula (ET-MSsc Ms. Gr. 925–927), and its kontakia had only a reduced number of oikoi. The earliest kontakarion (ET-MSsc Ms. Gr. 925), dating to the 10th century, may serve as an example. The manuscript was rubrified Κονδακάριον σῦν Θεῷ by the scribe; the rest is not easy to decipher, since the first page was exposed to all kinds of abrasion, but it is obvious that this book is a collection of short kontakia organised according to the new menaion cycle like a sticherarion, beginning with 1 September and the feast of Symeon the Stylite. It has no notation; instead, the date is indicated and the genre κονδάκιον is followed by the dedicated Saint and the incipit of the model kontakion (in this case not even with an indication of its echos by a modal signature).
Folio 2 verso shows a kontakion ἐν ἱερεῦσιν εὐσεβῶς διαπρέψας which was composed over the prooimion used for the kontakion for the Exaltation of the Cross, ὁ ὑψωθεῖς ἐν τῷ σταυρῷ. The prooimion is followed by three stanzas called oikoi, but they all share with the prooimion the same refrain, called "ephymnion" (ἐφύμνιον), ταὶς σαῖς πρεσβεῖαις, which concludes each oikos. The model for these oikoi, however, was not taken from the same kontakion, but from the other kontakion, for Theophany, whose first oikos had the incipit τῇ γαλιλαίᾳ τῶν ἐθνῶν.
The Slavic reception is crucial for understanding how the kontakion changed under the influence of the Stoudites. During the 9th and 10th centuries new empires dominated by Slavic populations established themselves in the North: Great Moravia and the Kievan Rus' (a federation of East Slavic tribes ruled by Varangians between the Black Sea and Scandinavia). The Byzantines had plans to participate actively in the Christianization of those new Slavic powers, but those intentions failed. The well-established and recently Christianized (864) Bulgarian Empire created two new literary centres at Preslav and Ohrid. These empires required a state religion, legal codices, the translation of canonical scriptures, but also the translation of an over-regional liturgy as it had been created by the Stoudios Monastery, Mar Saba and Saint Catherine's Monastery. The Slavic reception confirmed this new trend, but also showed a detailed interest in the cathedral rite of the Hagia Sophia and in the pre-Studite organisation of the tropologion. Thus, these manuscripts are not only the earliest literary evidence of the Slavonic languages, offering a transcription of their local variants, but also the earliest sources of the Constantinopolitan cathedral rite with musical notation, although transcribed into a notation of its own, based on a single tone system and on the contemporary layer of 11th-century notation, the roughly diastematic Old Byzantine notation.
Unfortunately, no Slavonic tropologion written in Glagolitic script by Cyril and Methodius has survived. This lack of evidence does not prove that it never existed, since certain conflicts with Benedictines and other Slavonic missionaries in Great Moravia and Pannonia were obviously about an Orthodox rite translated into Old Church Slavonic and already practised by Methodius and Clement of Ohrid. Only a few early Glagolitic sources are left. The Kiev Missal proves a West Roman influence in the Old Slavonic liturgy for certain territories of Croatia. A later 11th-century New Testament lectionary known as the Codex Assemanius was created by the Ohrid Literary School. A euchologion (ET-MSsc Ms. Slav. 37) was in part compiled for Great Moravia by Cyril, Clement, Naum and Constantine of Preslav; it was probably copied at Preslav about the same time. The aprakos lectionary proves that the Studite typikon was obeyed concerning the organisation of reading cycles. This explains why Svetlana Kujumdžieva assumed that the "church order" mentioned in Methodius' vita meant the mixed Constantinopolitan-Sabbaite rite established by the Stoudites. A later finding by the same author, however, pointed in another direction. In a recent publication she chose "Iliya's book" (RUS-Mda Fond 381, Ms. 131) as the earliest example of an Old Church Slavonic tropologion (around 1100); it has compositions by Cyril of Jerusalem, agrees about 50% with the earliest tropologion of Sinai (ET-MSsc Ms. NE/MΓ 56+5), and is likewise organised as a menaion (beginning with September, like the Stoudites), but it still includes the movable cycle. Hence, its organisation is still close to the tropologion, and it has compositions ascribed not only to Cosmas and John, but also to Stephen the Sabaite, Theophanes the Branded, the Georgian scribe and hymnographer Basil at Mar Saba, and Joseph the Hymnographer. Furthermore, musical notation has been added on some pages, which reveals an exchange between Slavic literary schools and scribes of Sinai or Mar Saba.
Kujumdžieva later pointed to a Southern Slavic origin (also based on linguistic arguments, since 2015), although feasts of local saints celebrated on the same day as Christina, namely Boris and Gleb, had been added. Even if its reception of a pre-Studite tropologion was of Southern Slavic origin, there is evidence that this manuscript was copied and adapted for use in Northern Slavic territories. The adaptation to the menaion of the Rus' rather proves that notation was used only in a few parts, where a new translation of a certain text required a new melodic composition which was no longer included within the existing system of melodies established by the Stoudites and their followers. There is, moreover, a coincidence with the early fragment from the Berlin collection, where the ἀλλὸ rubric is followed by a modal signature and some early neumes, while the elaborated znamennaya notation is used for a new sticheron (ино) dedicated to Saint Christina.
Recent systematic editions of the 12th-century notated mineya (like RUS-Mim Ms. Sin. 162, with about 300 folios for the month of December alone), which included with notation not just samoglasni (idiomela) but even podobni (prosomoia) and akrosticha (while the kondaks were left without notation), have revealed that the philosophy of the literary schools in Ohrid and Preslav required the use of notation only in exceptional cases. The reason is that their translations of Greek hymnography were not very literal, but often quite far from the content of the original texts; the main concern of these schools was the recomposition or troping of the given system of melodies (with their models known as avtomela and heirmoi), which was left intact. The Novgorod project of re-translation during the 12th century tried to come closer to the meaning of the texts, and notation was needed to control the changes within the system of melodies.
Concerning the Slavic rite celebrated in various parts of the Kievan Rus', there was an interest not only in the organisation of monastic chant, the tropologion, and the oktoich or osmoglasnik (which included chant of the irmolog, podobni (prosomoia) and their models, the samopodobni), but also in the samoglasni (idiomela), as in the case of Iliya's book.
Since the 12th century there are also Slavic stichirars which included not only the samoglasni but also the podobni, provided with znamennaya notation. A comparison of the very first samoglasen наста въходъ лѣтоу ("Enter the entrance of the annual cycle") in glas 1 (ἐπέστη ἡ εἴσοδος τoῦ ἐνιαυτοῦ, echos protos, SAV 1) of the mineya shows that the znamennaya version is much closer to fita (theta) notation, since the letter "θ" corresponds there to other signs in Coislin and to a synthetic way of writing a kratema group in Middle Byzantine notation. It was obviously an elaboration of the simpler version written in Coislin.
The Middle Byzantine version allows one to recognise the exact steps (intervals) between the neumes. They are described here according to the Papadic practice of solfège called "parallage" (παραλλαγή), which is based on echemata: for ascending steps the echemata of the kyrioi echoi are always indicated, for descending steps those of the plagioi echoi. If the phonic steps of the neumes were recognised according to this method, the resulting solfège was called "metrophonia". The step between the first neumes at the beginning passed through the protos pentachord between kyrios (a as transcription for α') and plagios phthongos (D as transcription of πλα'): a—Da—a—G—a—G—FGa—a—EF—G—a—acbabcba. The Coislin version seems to end (ἐνιαυτοῦ) thus: EF—G—a—Gba (the klasma indicates that the following kolon continues immediately in the music). In znamennaya notation the combination of dyo apostrophoi (dve zapĕtiye) and oxeia (strela) at the beginning (наста) is called "strela gromnaya" and obviously derives from the combination "apeso exo" in Coislin notation. According to the customs of Old Byzantine notation, "apeso exo" was not yet written with the "spirits" called "chamile" (down) or "hypsile" (up), which later, as pnevmata, specified the interval of a fifth (four steps). As usual, the Old Church Slavonic translation of the text has fewer syllables than the Greek verse. The neumes only show the basic structure, which was memorised as metrophonia by the use of parallage, not the melos of the performance. The melos depended on various ways of singing an idiomelon: either together with a choir, or by asking a soloist to create a rather individual version (alternation between soloist and choir was at least common by the 14th century, when the Middle Byzantine sticherarion in this example was created). But the comparison clearly reveals the potential (δύναμις) of the rather complex genre of the idiomelon.
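The relation between metrophonia and parallage described above can be illustrated with a small sketch. This is only an informal model, assuming the diatonic ladder and the pitch letters used in this paragraph (D for plagios protos, a for kyrios protos); the demo step sequence is invented and is not a transcription of the sticheron discussed here.

```python
# Minimal illustrative sketch of metrophonia and parallage, assuming the
# conventions of the paragraph above; not a transcription of any manuscript.

LADDER = ["D", "E", "F", "G", "a", "b", "c", "d"]        # diatonic phthongoi
QUALITY = ["protos", "devteros", "tritos", "tetartos"]   # modal quality repeats every four steps

def metrophonia(start, steps):
    """Realise signed interval steps read from the neumes as a pitch sequence."""
    pos = LADDER.index(start)
    pitches = [start]
    for s in steps:
        pos += s
        pitches.append(LADDER[pos])
    return pitches

def parallage(start, steps):
    """Solmise the same steps: kyrioi echemata for ascents, plagioi for descents."""
    pos = LADDER.index(start)
    names = []
    for s in steps:
        direction = 1 if s > 0 else -1
        for _ in range(abs(s)):                 # leaps are solmised step by step
            pos += direction
            kind = "kyrios" if direction > 0 else "plagios"
            names.append(f"{kind} {QUALITY[pos % 4]}")
    return names

if __name__ == "__main__":
    demo = [-4, 4]                              # e.g. from a down a fifth to D and back
    print(metrophonia("a", demo))               # ['a', 'D', 'a']
    print(parallage("a", demo))
```

Used this way, the sketch only reproduces the skeleton that a singer memorised; the actual melos, as the paragraph notes, depended on the performance practice.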
The background of Antonin's interest in the celebrations at the Hagia Sophia of Constantinople, as documented by his description of the ceremony around Christmas and Theophany in 1200, was the diplomatic exchange between Novgorod and Constantinople.
In the Primary Chronicle (Повѣсть времѧньныхъ лѣтъ, "tale of bygone years") it is reported how a legation of the Rus' was received in Constantinople and how its members spoke about their experience in the presence of Vladimir the Great in 987, before the Grand Prince Vladimir decided on the Christianization of the Kievan Rus' (Laurentian Codex, written at Nizhny Novgorod in 1377):
On the morrow, the Byzantine emperor sent a message to the patriarch to inform him that a Russian delegation had arrived to examine the Greek faith, and directed him to prepare the church Hagia Sophia and the clergy, and to array himself in his sacerdotal robes, so that the Russians might behold the glory of the God of the Greeks. When the patriarch received these commands, he bade the clergy assemble, and they performed the customary rites. They burned incense, and the choirs sang hymns. The emperor accompanied the Russians to the church, and placed them in a wide space, calling their attention to the beauty of the edifice, the chanting, and the offices of the archpriest and the ministry of the deacons, while he explained to them the worship of his God. The Russians were astonished, and in their wonder praised the Greek ceremonial. Then the Emperors Basil and Constantine invited the envoys to their presence, and said, "Go hence to your native country," and thus dismissed them with valuable presents and great honor. Thus they returned to their own country, and the prince called together his vassals and the elders. Vladimir then announced the return of the envoys who had been sent out, and suggested that their report be heard. He thus commanded them to speak out before his vassals. The envoys reported: "When we journeyed among the Bulgars, we beheld how they worship in their temple, called a mosque, while they stand ungirt. The Bulgarian bows, sits down, looks hither and thither like one possessed, and there is no happiness among them, but instead only sorrow and a dreadful stench. Their religion is not good. Then we went among the Germans, and saw them performing many ceremonies in their temples; but we beheld no glory there. Then we went on to Greece, and the Greeks led us to the edifices where they worship their God, and we knew not whether we were in heaven or on earth. For on earth there is no such splendor or such beauty, and we are at a loss how to describe it. We know only that God dwells there among men, and their service is fairer than the ceremonies of other nations. For we cannot forget that beauty. Every man, after tasting something sweet, is afterward unwilling to accept that which is bitter, and therefore we cannot dwell longer here.
There was obviously also an interest in the representative aspect of those ceremonies at the Hagia Sophia of Constantinople. Today it is still documented by seven Slavic kondakar's.
Six of them were written in scriptoria of the Kievan Rus' during the 12th and 13th centuries, while one later kondakar' without notation was written in the Balkans during the 14th century. The aesthetic of the calligraphy and the notation developed so much over a span of 100 years that it must be regarded as a local tradition, but also one which provides us with the earliest evidence of the cheironomic signs, which otherwise survived in only one later Greek manuscript.
In 1147, during a visit of the Frankish King Louis VII, the chronicler Eude de Deuil described the cheironomia, but also the presence of eunuchs during the cathedral rite. With respect to the custom of the Missa greca (for the patron of the Royal Abbey of Saint Denis), he reported that the Byzantine emperor sent his clerics to celebrate the divine liturgy for the Frankish visitors:
Novit hoc imperator; colunt etenim Graeci hoc festum, et clericorum suorum electam multitudinem, dato unicuique cereo magno, variis coloribus et auro depicto regi transmisit, et solemnitatis gloriam ampliavit. Illi quidem a nostris clericis verborum et organi genere dissidebant, sed suavi modulatione placebant. Voces enim mistae, robustior cum gracili, eunucha videlicet cum virili (erant enim eunuchi multi illorum), Francorum animos demulcebant. Gestu etiam corporis decenti et modesto, plausu manuum, et inflexione articulorum, jucunditatem visibus offerebant.
Since the emperor realised that the Greeks celebrate this feast, he sent to the king a selected group of his clergy, each of whom he had equipped with a large taper [votive candle] decorated elaborately with gold and a great variety of colours; and he increased the glory of the ceremony. They differed from our clerics in the words and the order of the service, but they pleased us with sweet modulations. For the blended voices, the more robust together with the graceful, that is, the eunuch's with the manly voice (for many of them were eunuchs), softened the hearts of the Franks. Through decent and modest gestures of the body, clapping of hands and flexions of the fingers, they offered a delight to the eyes.
The Kievan Rus' obviously cared about this tradition, but especially about the practice of cheironomia and its particular notation: the so-called "kondakarian notation". A comparison of the Easter koinonikon proves two things: the Slavic kondakar' did not correspond to the "pure" form of the Greek kontakarion, which was the book of the soloist who also had to recite the larger parts of the kontakia or kondaks; it was rather a mixed form which also incorporated the choir book (asmatikon), since there is no evidence that such an asmatikon was ever used by clerics of the Rus'. The kondakarian notation integrated the cheironomic signs together with simple signs (a Byzantine convention which survived in only one manuscript, GR-KA Ms. 8) and combined them with Old Slavic znamennaya notation as it had been developed in the sticheraria and heirmologia of the 12th century and the so-called Tipografsky Ustav.
Although current knowledge of znamennaya notation is as limited as that of the other Old Byzantine variants such as Coislin and Chartres notation, a comparison with the asmatikon Kastoria 8 offers a kind of bridge between the former concept of the cheironomiai, the hand signs used by the choir leaders as the only authentic notation of the cathedral rite, and the later concept of great signs integrated and transcribed into Middle Byzantine notation. Since Kastoria 8 is a pure form of the choir book, however, such a comparison is only possible for an asmatic chant genre such as the koinonikon.
See, for instance, the comparison of the Easter koinonikon with the Slavic Blagoveščensky kondakar', which was written about 1200 in Novgorod, the northern town of the Rus'; its name derives from its preservation in the collection of the Blagoveščensky monastery at Nizhny Novgorod.
The comparison should not suggest that both versions are identical; rather, the earlier source documents an earlier reception of the same tradition (since there is a difference of about 120 years between the two sources, it is impossible to judge the differences precisely). The rubric "Glas 4" is most likely an error of the notator and meant "Glas 5", but it is also possible that the Slavic tone system was already organised in triphonia at such an early period. Thus, it could also mean that анеане, undoubtedly the plagios protos enechema ἀνεανὲ, was supposed to be on a very high pitch (about an octave higher); in that case the tetartos phthongos would have not the octave species of tetartos (a tetrachord up and a pentachord down), but that of plagios protos. The comparison also shows a great likeness in the use of asmatic syllables, such as "оу" written as the single character "ꙋ". In her description of the notational style, Tatiana Shvets also mentions the kola (frequent punctuation within the text line); medial intonations can appear even within a word, which was sometimes due to the different number of syllables in the translated Slavonic text. A comparison of the neumes also shows many similarities to Old Byzantine (Coislin, Chartres) signs, such as ison (stopica), apostrophos (zapĕtaya), oxeia (strela), vareia (palka), dyo kentemata (točki), diple (statĕya), and klasma (čaška); the krusma (κροῦσμα) was actually an abbreviation for a sequence of signs (palka, čaška with statĕya, and točki), and omega "ω" meant a parakalesma, a great sign related to a descending step (see the echema for plagios protos: it is combined with dyo apostrophoi, called "zapĕtaya").
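The sign correspondences just listed can be gathered into a small lookup table; the following is only an illustrative summary of the pairings named above, not a complete or authoritative sign inventory.

```python
# Correspondences between Old Byzantine (Coislin/Chartres) signs and their Old
# Slavic znamennaya counterparts, as listed in the paragraph above; purely an
# illustrative lookup, not a complete sign inventory.
BYZANTINE_TO_SLAVIC = {
    "ison": "stopica",
    "apostrophos": "zapĕtaya",
    "oxeia": "strela",
    "vareia": "palka",
    "dyo kentemata": "točki",
    "diple": "statĕya",
    "klasma": "čaška",
}

# The krusma was not a single sign but an abbreviation for a sequence of signs:
KRUSMA_SEQUENCE = ["palka", "čaška with statĕya", "točki"]

def slavic_equivalent(byzantine_name: str) -> str:
    """Return the Slavic counterpart of a Byzantine sign name, if listed above."""
    return BYZANTINE_TO_SLAVIC.get(byzantine_name, "unknown")

if __name__ == "__main__":
    print(slavic_equivalent("oxeia"))  # strela
```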
Another very modern part of the Blagoveščensky kondakar' was a Polyeleos composition (a post-Studite custom, since the Studites imported the Great Vespers from Jerusalem) on Psalm 135, which was divided into eight sections, each in a different glas.
The refrain алелɤгιа · алелɤгιа · ананҍанҍс · ꙗко въ вҍкы милость ѥго · алелɤгιа ("Alleluia, alleluia. [medial intonation] For His love endureth forever. Alleluia.") was written out only after a medial intonation, for the conclusion of the first section. "Ananeanes" was the medial intonation of echos protos (glas 1). This part was obviously composed without modulating to the glas of the following section. The refrain was likely sung by the right choir after the intonation of its leader, the domestikos; the preceding psalm text was probably sung by a soloist (monophonaris) from the ambo. It is interesting that only the choir sections are entirely provided with cheironomiai. Slavic cantors had obviously been trained in Constantinople to learn the hand signs which corresponded to the great signs in the first row of Kondakarian notation, while the monophonaris parts had them only at the end, so that they were probably indicated by the domestikos or lampadarios in order to get the attention of the choir singers before the medial intonations were sung.
We do not know whether the whole psalm was sung, or each section on a different day (during Easter week, for instance, when the glas changed daily), but the following sections do not have a written-out refrain as a conclusion, so the first refrain of each section was likely repeated as a conclusion, often with more than one medial intonation, which indicates that there was an alternation between the two choirs. This is the case, for instance, within the section of glas 3 (whose modal signature was obviously forgotten by the notator), where the text of the refrain is almost treated like a "nenanismaton": "але-нь-н-на-нъ-ъ-на-а-нъ-ı-ъ-лɤ-гı-а". The following medial intonations "ипе" (εἴπε "Say!") and "пал" (παλὶν "Again!") obviously imitated medial intonations of the asmatikon without a true understanding of their meaning, because παλὶν usually indicated that something was to be repeated from the very beginning. Here one choir obviously continued from the other, often interrupting it within a word.
By 1207, when the Uspensky kondakar' was written, the traditional cathedral rite no longer survived in Constantinople, because the court and the patriarchate had gone into exile to Nikaia in 1204, after Western crusaders had made it impossible to continue the local tradition. The Greek books of the asmatikon (choir book) and the book for the monophonaris (the psaltikon, which often included the kontakarion) were written outside Constantinople (on the island of Patmos, at Saint Catherine's Monastery, on the Holy Mount Athos, and in Italy), in a new notation which developed some decades later within the books sticherarion and heirmologion: Middle Byzantine round notation. Thus the book kontakarion-psaltikon dedicated to the Constantinopolitan cathedral rite must also be regarded, like the Slavic kondakar', as part of its reception history outside Constantinople.
The reason why the psaltikon was called "kontakarion" was that most parts of a kontakion (except for the refrain) were sung by a soloist from the ambo, and that the kontakion collection had a prominent and dominant place within the book. The classical repertoire, especially the kontakion cycle of the movable feasts mainly attributed to Romanos, usually included about 60 notated kontakia, which were obviously reduced to the prooimion and the first oikos; this truncated form is commonly regarded as the reason why the notated form presents a melismatic elaboration of the kontakion as it was commonly celebrated during the cathedral rite at the Hagia Sophia. Within the notated kontakarion-psaltikon, the cycle of kontakia was accordingly combined with a prokeimenon and alleluiarion cycle as proper chant of the divine liturgy, at least for the more important feasts of the movable and immovable cycles. Since the Greek kontakarion has only survived with Middle Byzantine notation, which developed outside Constantinople after the decline of the cathedral rite, the notators of these books must have integrated the cheironomiai or great signs still present in the Slavic kondakar's into the musical notation of the new book, the sticherarion.
The typical composition of a kontakarion-psaltikon (τὸ ψαλτικὸν, τὸ κοντακάριον) was:
The choral sections had been collected in a second book for the choir which was called asmatikon (τὸ ᾀσματικὸν). It contained the refrains (dochai) of the prokeimena, troparia, sometimes the ephymnia of the kontakia and the hypakoai, but also ordinary chant of the divine liturgy like the eisodikon, the trisagion, the choir sections of the cherubikon asmatikon, the weekly and annual cycle of koinonika. There were also combined forms as a kind of asmatikon-psaltikon.
In Southern Italy there were also mixed forms of psaltikon-asmatikon which preceded the Constantinopolitan book "akolouthiai".
Nevertheless, the Greek monastic reception as well as the Slavic reception within the Kievan Rus' show many coincidences within the repertoire, so that even kontakia created in the North for local customs can be easily recognised by a comparison of Slavonic kondakar's with Greek psaltika-kontakaria. Constantin Floros' edition of the melismatic chant proved that the total repertoire of 750 kontakia (about two thirds of them composed since the 10th century) was based on a very limited number of classical melodies which served as models for numerous new compositions: he counted 42 prooimia, including 14 prototypes which were used as models for other kontakia but rubrified not as avtomela but as idiomela (the other 28 remained more or less unique), and 13 oikoi which were used separately for the recitation of oikoi. The most frequently used models also generated a prosomoion cycle of eight kontakia anastasima. The repertoire of these melodies (not so much their elaborated form) was obviously older and was transcribed with echemata in Middle Byzantine notation which were partly completely different from those used in the sticherarion. While the Hagiopolites mentions 16 echoi of the cathedral rite (four kyrioi, four plagioi, four mesoi and four phthorai), the kontakia-idiomela alone represent at least 14 echoi (four kyrioi, with devteros and tritos represented as mesos forms; four plagioi; three additional mesoi; and three phthorai).
The integrative role of Middle Byzantine notation becomes visible in that many echemata were used which were not known from the sticherarion. Also, the role of the two phthorai, known as the chromatic νενανῶ and the enharmonic νανὰ, was completely different from their role within the Hagiopolitan Octoechos: phthora nana clearly dominated (even in devteros echoi), while phthora nenano was rarely used. Nothing is known about the exact division of the tetrachord, because no treatise concerned with the tradition of the cathedral rite of Constantinople has survived, but the Coislin sign of the xeron klasma (ξηρὸν κλάσμα) appeared on different pitch classes (phthongoi) than it did within the stichera idiomela of the sticherarion.
The Slavic kondakar's used only very few oikoi pointing to certain models, and the text of the first oikos was written only in the earliest manuscript, known as the Tipografsky Ustav, but never provided with notation. If there was an oral tradition, it probably did not survive until the 13th century, because the oikoi are simply missing in the kondakar's of that period.
One example of a kondak-prosomoion whose music can only be reconstructed by comparison with its model kontakion, as notated in Middle Byzantine round notation, is Аще и убьѥна быста, which was composed for the feast of Boris and Gleb (24 July) over the kondak-idiomelon Аще и въ гробъ for Easter in echos plagios tetartos.
The two Middle Byzantine versions, in the kontakarion-psaltikon of Paris and the one of Sinai, are not identical. The first kolon ends on different phthongoi: either on plagios tetartos (C, if the melos starts there) or one step lower, on the phthongos of echos varys, the plagios tritos called "grave echos" (a kind of B flat). It is definitely exaggerated to claim that anyone has "deciphered" Kondakarian notation, which is hardly true for any manuscript of this period. But even considering the difference of at least about 80 years between the Old Byzantine version of the Slavic scribes in Novgorod (the second row of the kondakar's) and the Middle Byzantine notation used by the monastic scribes of the later Greek manuscripts, it seems obvious that all three manuscripts in comparison transmit one and the same cultural heritage associated with the cathedral rite of the Hagia Sophia: the melismatic elaboration of the truncated kontakion. Both Slavonic kondaks strictly follow the melismatic structure in the music and the frequent segmentation by kola (which does not exist in the Middle Byzantine version), interrupting the conclusion of the first text unit with a kolon of its own using the asmatic syllable "ɤ".
Concerning the two martyred princes of the Kievan Rus', Boris and Gleb, there are two kondak-prosomoia dedicated to them in the Blagoveščensky Kondakar' on folios 52r–53v: the second is the prosomoion over the kondak-idiomelon for Easter in glas 8, the first the prosomoion Въси дьньсь made over the kondak-idiomelon for Christmas, Дева дньсь (Ἡ παρθένος σήμερον), in glas 3. Unlike the Christmas kontakion in glas 3, the Easter kontakion was not chosen as the model for the kontakion anastasimon of glas 8 (plagios tetartos). It had two other important rivals: the kontakion-idiomelon Ὡς ἀπαρχάς τῆς φύσεως (ꙗко начатъкы родоу) for All Saints, although an enaphonon (protos phthongos) which begins on the lower fourth (plagios devteros), and the prooimion Τῇ ὑπερμάχῳ στρατιγῷ (Възбраньноумоу воѥводѣ побѣдьнаꙗ) of the Akathistos hymn in echos plagios tetartos (which appears only in Greek kontakaria-psaltika).
Even among the notated sources there was a distinction, established by Christian Thodberg and by Jørgen Raasted, between the short and the long psaltikon style, based on the musical setting of the kontakia. The latter chose Romanos' Christmas kontakion Ἡ παρθένος σήμερον to demonstrate the difference, and his conclusion was that the known Slavic kondakar's belonged rather to the long psaltikon style.
There was a discussion, promoted by Christian Troelsgård, that Middle Byzantine notation should not be distinguished from Late Byzantine notation. The argument was that the establishment of a mixed rite after the return of the court and the patriarchate from exile in Nikaia in 1261 brought nothing really innovative with respect to the sign repertoire of Middle Byzantine notation. The innovation had probably already happened outside Constantinople, in those monastic scriptoria whose scribes cared about the lost cathedral rite and integrated different forms of Old Byzantine notation (those of the sticherarion and heirmologion, such as theta notation and the Coislin and Chartres types, as well as those of the Byzantine asmatikon and kontakarion, which were based on cheironomies). The argument was mainly based on the astonishing continuity which a new type of treatise, the Papadike, revealed by its continuous presence from the 13th to the 19th centuries. In a critical edition of this huge corpus, Troelsgård, together with Maria Alexandru, discovered many different functions that this treatise type could have. It was originally an introduction for a revised type of sticherarion, but it also introduced many other books, like mathemataria (literally "books of exercises", such as a sticherarion kalophonikon or a book with heirmoi kalophonikoi, stichera kalophonika, anagrammatismoi and kratemata), akolouthiai (from "taxis ton akolouthion", which meant "order of services", a book which combined the choir book asmatikon, the book of the soloist kontakarion, and, with the rubrics, the instructions of the typikon), and the Ottoman anthologies of the Papadike which tried to continue the tradition of the notated book akolouthiai (usually introduced by a Papadike, a kekragarion/anastasimatarion, an anthology for Orthros, and an anthology for the divine liturgies).
With the end of creative poetical composition, Byzantine chant entered its final period, devoted largely to the production of more elaborate musical settings of the traditional repertoire: either embellishments of the earlier simpler melodies (palaia, "old"), or original music in a highly ornamental style (called "kalophonic"). This was the work of the so-called Maïstores, "masters", of whom the most celebrated was St. John Koukouzeles (14th century), a famous innovator in the development of chant. The multiplication of new settings and elaborations of the traditional repertoire continued in the centuries following the fall of Constantinople.
One part of this process was the redaction and limitation, during the 14th century, of the existing repertoire given by the notated chant books of the sticherarion (menaion, triodion, pentekostarion, and oktoechos) and the heirmologion. Philologists have called this repertoire the "standard abridged version" and counted 750 stichera for the menaion part alone, and 3300 odes of the heirmologion.
Chronological research into the books sticherarion and heirmologion does not only reveal an evolution of the notation systems which were invented just for these chant books; the books can also be studied with respect to the repertoire of heirmoi and of stichera idiomela. The earliest evolution of sticherarion and heirmologion notation was the explanation of the theta (Slav. fita), oxeia or diple, which were simply set under a syllable where a melisma was expected. These explanations were written either with Coislin notation (scriptoria of monasteries under the administration of the Patriarchates of Jerusalem and Alexandria) or with Chartres notation (scriptoria in Constantinople or on Mount Athos). Both notations went through different stages. Since the evolution of the Coislin system also aimed at a reduction of signs, defining the interval value with fewer signs in order to avoid confusion with an earlier habit of using them, it was favoured over the more complex and stenographic Chartres notation by later scribes during the late 12th century. The standard round notation (also known as Middle Byzantine notation) combined signs of both Old Byzantine notation systems during the 13th century. Concerning the repertoire of unique compositions (stichera idiomela) and of models of canon poetry (heirmoi), scribes increased their number between the 12th and 13th centuries. The Middle Byzantine redaction of the 14th century reduced this number within a standard repertoire and tried to unify the many variants, sometimes offering only a second variant notated in red ink. Since the 12th century, prosomoia (texts composed over well-known avtomela) had also been increasingly written down with notation, so that a formerly local oral tradition of applying psalmody to the evening psalm (Ps 140) and the Laud psalm (Ps 148) finally became visible in these books.
The characteristic of these books is that their collections were over-regional. Probably the oldest fully notated chant book is the heirmologion of the Great Lavra on Mount Athos (GR-AOml Ms. β 32), which was written around the turn of the 11th century. With 312 folios it has many more canons than later redactions notated in Middle Byzantine notation. It was notated in archaic Chartres notation and was organised in canon order. Each canon within an echos section was numbered consecutively and has detailed ascriptions concerning the feast and the author who was believed to have composed the poetry and music of the heirmos:
| canon order | GR-AOml Ms. β 32 (canons) | GR-AOml Ms. β 32 (folios) | F-Pn Coislin 220 (canons) | F-Pn Coislin 220 (folios) |
|---|---|---|---|---|
| πλάγιος τοῦ πρώτου | 41 | 156v–191v | 20 | 124r–148r |
| πλάγιος τοῦ δευτέρου | 53 | 192r–240r | 23 | 149r–176r |
| πλάγιος τοῦ τετάρτου | 54 | 263r–312v | 24 | 198r–235v |
In exceptional cases, some of these canons were marked as prosomoia and written out with notation. In comparison, later heirmologia notated just the heirmoi, with the text by which they were remembered (referred to by an incipit), while the akrosticha composed over the model of the heirmos were written in the text book, the menaion. The famous heirmologion of Paris, Ms. 220 of the fonds Coislin (which gave its name to "Coislin notation"), written about 100 years later, already seems to collect almost half the number of heirmoi. But within many heirmoi there are one or even two alternative versions (ἄλλος, "another one") inserted directly after certain odes, not just with different neumes but also with different texts. It seems that several former heirmoi by the same author or written for the same occasion had been summarised under one heirmos, and some of the odes of the canon could be replaced by others. But the heirmoi for one and the same feast offered singers the option to choose between different schools (the Sabaite, represented by Andrew, Cosmas, and John "the monk" and his nephew Stephen; the Constantinopolitan, represented by Patriarch Germanos; and the one of Jerusalem, by George of Nicomedia and Elias), between different echoi, and even between different heirmoi by the same author.
Apart from this canonisation, which can be observed in the process of redaction between the 12th and 14th centuries, one should also note that the table above compares two different redactions of the 11th and 12th centuries: the one of Constantinople and Athos (Chartres notation) and another one from the scriptoria of Jerusalem (especially the Patriarchate and the Monastery of Saint Sabbas) and of Sinai within the Patriarchate of Alexandria, written in Coislin notation. Within the medium of Middle Byzantine notation, which combined signs stemming from both Old Byzantine notation systems, there was a later process of unification during the 14th century which merged both redactions; this process was preceded by the dominance of Coislin notation by the end of the 12th century, when the more complex Chartres notation fell out of use, even at Constantinopolitan scriptoria.
To a certain degree, remnants of Byzantine or early (Greek-speaking, Orthodox Christian) Near Eastern music may be found in the music of the Ottoman Court. Examples such as that of the composer and theorist Prince Cantemir of Romania learning music from the Greek musician Angelos indicate the continuing participation of Greek-speaking people in court culture. The influences of the ancient Greek basin and of Greek Christian chant, as origins of Byzantine music, are confirmed. The music of Turkey was influenced by Byzantine music too (mainly in the years 1640–1712). Ottoman music is a synthesis, carrying the culture of Greek and Armenian Christian chant. It emerged as the result of a sharing process between the many civilizations that met together in the Orient, given the breadth and duration of these empires and the great number of ethnicities and major or minor cultures that they encompassed or came into contact with at each stage of their development.
Chrysanthos of Madytos (c. 1770–1846), Gregory the Protopsaltes (c. 1778 – c. 1821), and Chourmouzios the Archivist were responsible for a reform of the notation of Greek ecclesiastical music. Essentially, this work consisted of a simplification of the Byzantine musical symbols which, by the early 19th century, had become so complex and technical that only highly skilled chanters were able to interpret them correctly. The work of the three reformers is a landmark in the history of Greek Church music, since it introduced the system of neo-Byzantine music upon which the present-day chants of the Greek Orthodox Church are based. Unfortunately, their work has often been misinterpreted since, and much of the oral tradition has been lost.
The ison is a drone note, or a slow-moving lower vocal part, used in Byzantine chant and some related musical traditions to accompany the melody. It is assumed that the ison was first introduced into Byzantine practice in the 16th century.
The practice of terirem is vocal improvisation on nonsense syllables. It can contain syllables like "te ri rem" or "te ne na", sometimes enriched with some theological words. It is customary for a choir or an Orthodox psalmist to start the chanting by finding the musical tone, singing a "ne-ne" at the very beginning.
Simon Karas (1905–1999) began an effort to assemble as much material as possible in order to restore the apparently lost tradition. His work was continued by his students Lycourgos Angelopoulos and Ioannis Arvanitis who both had a quite independent and different approach to the tradition.
Lycourgos Angelopoulos died on 18 May 2014, but during his lifetime he always perceived himself more as a student than a teacher, despite the great number of his students and followers and the great success he enjoyed as a teacher. He published some essays in which he explained the role that his teacher Simon Karas had played in his work. He studied the introduction of the New Method with respect to the question of which Middle Byzantine neumes had been abandoned by Chrysanthos when he introduced the New Method. In particular, he discussed the role of Petros Ephesios, the editor of the first printed editions, who still used the qualitative sign "oxeia", which was soon abandoned. In collaboration with Georgios Konstantinou, who wrote a new manual and introduction for his school, Lycourgos Angelopoulos re-introduced certain aphonic signs and re-interpreted them as ornamental signs according to the definitive rhythmic interpretation of the New Method, which had transcribed the melos into notation. Thus he had to provide a handwritten edition of his own for the whole repertoire of the living tradition, which was printed for all his students. To understand this properly: the new universal notation according to Chrysanthos could be used to transcribe any kind of Ottoman music, not only the church music composed according to the oktoechos melopœia, but also makam music and rural traditions of the Mediterranean. Thus the whole ornamental aspect of monophonic music now depended on an oral tradition, and it was no longer represented by the aphonic or great signs, which had to be understood from the traditional context rooted in the Byzantine psaltic art. Therefore, the other foundation of Angelopoulos' school was participatory fieldwork with traditional protopsaltes, those of the archon protopsaltes of the Ecumenical Patriarchate in Constantinople (many of whom had been forced into exile since the Cyprus crisis of 1964) and Athonite singers, especially the recordings he made of Father Dionysios Firfiris.
Two major styles of interpretation have evolved, the Hagioritic, which is simpler and is mainly followed in monasteries, and the Patriarchal, as exemplified by the style taught at the Great Church of Constantinople, which is more elaborate and is practised in parish churches. Nowadays the Orthodox churches maintain chanting schools in which new cantors are trained. Each diocese employs a protopsaltes ("first cantor"), who directs the diocesan cathedral choir and supervises musical education and performance. The protopsaltes of the Patriarchates are given the title Archon Protopsaltes ("Lord First Cantor"), a title also conferred as an honorific to distinguished cantors and scholars of Byzantine music.
While Angelopoulos' school basically stuck to the transcriptions of Chourmouzios the Archivist, who as one of the great teachers had also transcribed the Byzantine repertoire according to the New Method at the beginning of the 19th century, another student of Karas, Ioannis Arvanitis, developed an autonomous approach which allowed him to study the older sources written in Middle Byzantine notation.
Ioannis Arvanitis published his ideas in several essays and in a doctoral thesis. He founded several ensembles, like Aghiopolitis, which performed the tradition of the Byzantine cathedral rite based on his own study of medieval kontakaria and asmatika in Italy, and got involved in collaborations with other ensembles whose singers were instructed by him, such as Cappella Romana, directed by Alexander Lingas, Ensemble Romeiko, directed by Yorgos Bilalis, or Vesna Sara Peno, who studied with Ioannis Arvanitis before she founded her own ensemble, dedicated to Saint Kassia and to the Old Church Slavonic repertoire according to the Serbian tradition of the Athonite Hilandar Monastery.
2nd Grade Math Worksheets Adding Two Digit Numbers – This is one of the most used online math worksheets: Math Worksheet Printable Dominoes for Elementary Children. This printable math worksheet is great for elementary and kindergarten students. It helps them practice addition, subtraction, and division. It teaches the whole lesson and includes fun activities and games on one page.
Math Worksheet Printable – Uses for a Math Worksheet
With the free 2nd Grade Math Worksheets Adding Two Digit Numbers, your children learn addition, subtraction, and multiplication by laying out the different shapes and colors on the worksheet, and then they must choose the correct answer. You can also write your answers down with a pen. The virtual magnifier allows you to see the various shapes and colors. This product contains a worksheet with math and an activity set that encourages children to use the different colors and shapes to solve problems.
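As a purely hypothetical illustration of how such double-digit addition practice problems could be generated programmatically (this script is not part of any worksheet product described above), a few lines of Python suffice:

```python
# Hypothetical sketch: generate two-digit addition practice problems with answers.
import random

def make_problems(n=10, seed=None):
    """Return n two-digit addition problems, each with its answer."""
    rng = random.Random(seed)
    return [f"{a} + {b} = {a + b}"
            for a, b in ((rng.randint(10, 99), rng.randint(10, 99)) for _ in range(n))]

if __name__ == "__main__":
    for line in make_problems(5, seed=1):
        print(line)
```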
Try Our FREE Worksheet For Double Digit Addition
Next is the Free Teacher Resources – Grade Mathematics. This teacher resource is thirty-six pages long and provides learning activities for all grades. It also includes a grade guide, printable math worksheets for each day, teacher quotations, and worksheets covering various topics, such as fractions, decimals, percentages, etc. Grade eight lessons include an activities center with color-coded pages for various topics such as colors, animals, nature, colors around us, and more. This worksheet can be used with the free teacher resource.
A thirty-one-page worksheet is included in the Free Teacher Resource – Subtraction. Students learn subtraction by developing counting skills. This worksheet uses a standard sheet of paper and includes a variety of different math problems including division, percentage, times tables, cube solutions, permutations, and denominators. The worksheet also contains blank spaces where students can enter in their answers. These blank spaces serve to improve counting skills.
You can also download the Free Teacher Resources – Roman Numerals worksheet. This is a thirty-two-page worksheet that includes basic addition, subtraction, sorting, division, averages, and multiplication/division tables. On the back of each lesson there is a question and answer section. This worksheet uses Roman numerals, which are much easier to understand than standard type-writing numbers.
There are many other resources for math worksheets. These worksheets can be used in a variety of colors and shapes. Some even use different fonts. By using these worksheets students are able to develop different types of counting and subtraction skills. They are also able to develop their handwriting. Students are learning essential concepts and skills through math worksheets. They also have fun doing it. |
Measuring the masses of brown dwarfs—the lightest objects ever weighed outside the Solar System—has been a painstaking process that would have been impossible without ultra-sharp images taken with the Keck II Telescope and its world-leading adaptive optics system. These images have such high angular resolution that if a human’s eyes could act like the Keck’s adaptive optics system, he or she would be able to read a magazine from a mile away.
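A rough back-of-the-envelope calculation shows why that comparison works; the numbers below (letter size, wavelength, aperture) are illustrative assumptions rather than official Keck specifications.

```python
# Back-of-the-envelope check of the "read a magazine from a mile away" comparison.
# All numbers are illustrative assumptions, not official Keck specifications.
import math

letter_height_m = 3e-3   # assume magazine print about 3 mm tall
distance_m = 1609.0      # one mile in metres
letter_angle_arcsec = math.degrees(letter_height_m / distance_m) * 3600
print(f"A 3 mm letter at one mile subtends about {letter_angle_arcsec:.2f} arcsec")

# Diffraction limit of a 10 m aperture in the near infrared (~2.2 micrometres),
# roughly the sharpness adaptive optics lets such a telescope approach:
wavelength_m = 2.2e-6
aperture_m = 10.0
diffraction_arcsec = math.degrees(1.22 * wavelength_m / aperture_m) * 3600
print(f"10 m aperture diffraction limit near 2.2 um: about {diffraction_arcsec:.3f} arcsec")

# The unaided eye resolves roughly one arcminute (about 60 arcsec), so an eye with
# this kind of sharpness would indeed pick out details far smaller than a letter
# viewed from a mile away.
```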
The positional accuracy achieved with such sharp images has enabled astronomers Michael Liu and Trent Dupuy of the Institute for Astronomy at the University of Hawai’i and Michael Ireland of the University of Sydney to determine—for the first time ever—the masses of the coldest brown dwarfs. Interestingly, their results are somewhat at odds with current theoretical predictions, challenging astronomers’ understanding of such cold objects.
It’s official—Mars has gas. Scientists recently confirmed that certain regions of the planet release methane into the Martian atmosphere. These findings open new questions about whether life exists on the Red Planet.
Earlier research suggested methane existed in the Martian atmosphere, but the results were ambiguous. Now, Michael Mumma of the NASA Goddard Spaceflight Center in Greenbelt, Md. and his colleagues have carefully observed the planet for three Martian years—the equivalent of seven Earth years. Using the W. M. Keck and the NASA Infrared telescopes atop Mauna Kea, Hawai’i, the team zeroed in on the atmosphere of the Red Planet, which ranges between 36 million to over 250 million miles from Earth depending on the planets’ orbits. The new data definitely shows that Mars is alive either biologically or geochemically.
But since astronomers can’t yet tell if the methane is a byproduct of biological or geological processes, it is almost as if the planet is “egging us on and challenging us by saying, ‘Hey, find out what this means’,” Mumma says.
Atmospheric turbulence causes stars to twinkle and blurs cosmic images making it difficult for astronomers to know if they are looking at one object or two, or even more. This turbulence trips up even the largest telescopes, including the twin Kecks.
Veteran observers Andrea Ghez of the University of California, Los Angeles and Claire Max of the University of California, Santa Cruz (UCSC), however, have, respectively, advanced and pioneered techniques to make the stars stop twinkling — at least from Keck’s perspective. For this and other work, Ghez has been named a MacArthur Fellow, and Max has earned Princeton’s Madison Medal.
Ghez was inducted into the MacArthur Fellows Program in September 2008. The program encourages writers, scientists, artists, social scientists, humanists, teachers, entrepreneurs and others of outstanding talent to pursue their own creative, intellectual and professional goals. Ghez has spent almost a decade exploiting two techniques, speckle imaging (which digitally combines very short telescopic exposures) and adaptive optics (which corrects for atmospheric turbulence), to map the movement of a group of stars.
These stars sit in the Sagittarius constellation near the center of our Milky Way galaxy. From her team’s observations, Ghez discovered that some of the stars orbit the Galactic Center at velocities that are fractions of the speed of light. The stars’ motions provide the strongest evidence for the theory that a supermassive black hole sits in the center of the Milky Way.
“The study of the black hole at the Galactic Center by Dr. Ghez is clearly one of the most impactful results that Keck Observatory has produced,” says Taft Armandroff, director of the Observatory. “To me, this work underscores the discovery potential of adaptive optics and observational programs spanning many years. Dr. Ghez’s award is well deserved.”
Winning the fellowship will allow Ghez to take more risks and pursue new ideas and areas in her research, she says. One of her ideas is to detect dark matter at the center of the Milky Way. She is also interested in studying the center of globular clusters to look for the elusive intermediate mass black hole and in studying the center of other galaxies to understand star formation in extreme environments outside our galaxy.
But, Ghez says, “right now there is still so very much to do at the center of our galaxy.” She is now focusing her research on understanding how the galaxy’s central black hole interacts with the stars, gas and dust that surround it.
Same ‘star’, different picture
Claire Max also makes ground-based telescopes see more clearly with adaptive optics. She is a co-inventor of the laser guide star adaptive optics systems used for astronomical research. For her work in this field, and her study of plasma physics, astronomy and astronomical instrumentation, Princeton University has honored Max with the 2009 James Madison Medal.
Early in her career, Max studied laser fusion at Lawrence Livermore National Laboratory, where she focused on laser-plasma interactions. Now, she directs the Center for Adaptive Optics, which is headquartered at UCSC. For her personal research, Max uses adaptive optics to study merging black holes at the centers of galaxies.
“Without Max’s leadership in implementing adaptive optics at Keck, many of our greatest contributions would not have happened. We join Princeton University in taking pride in such an impactful scientist,” Armandroff says.
Max earned her PhD from Princeton in 1972. The Association of Princeton Graduate Alumni awards the Madison Medal each year to a graduate student alumnus who has had a distinguished career, advanced the cause of graduate education or achieved an outstanding record of public service. Max is the first woman to receive the award, which is named for the fourth US president who many consider to be Princeton’s first graduate student. She received her medal and delivered an address during Princeton’s Alumni Day on Feb. 21, 2009.
W. M. Keck Observatory invited throngs of visitors to experience its world-leading astronomy enterprise during a greatly anticipated Open House on October 12, 2008. Advancement services coordinator Joan Campbell and executive assistant Leslie Kissner coordinated the event, which was held at company headquarters in Waimea. Throughout the day, Keck’s professional staff engaged guests in activities that let them explore the Observatory’s astronomy research and technological advancements.
Local resident Sharon Petrosky says she felt “connected to something bigger,” as she, along with hundreds of other visitors, gained an appreciation of the world-class Observatory located in their backyard. “I am learning so much about light, about space, about sound. Sound, for instance, is simply a vibration,” Petrosky says. “I’ve had so many compelling revelations today.”
The power of revelation was in full force at the Observatory, giving credence to this year’s theme— Welcome to the Edge of Discovery.
The stroke of midnight on Jan. 1, 2009 heralded the traditional celebrations of New Year’s Eve. It also launched the first global celebration of modern astronomy. Known as the International Year of Astronomy, or IYA 2009, the event has inspired organizations and institutions around the world to host activities commemorating the first 400 years of modern astronomy—an era that began in 1609 when Galileo first turned his telescope to the stars.
Keck Observatory kicked off its own IYA celebration during its well-attended Open House on October 12, 2008. The Observatory is also very proud to host the 2009 Maunakea Lecture Series to commemorate IYA, a year-long program that shares with listeners the world-class research taking place on Mauna Kea. The directors of the Mauna Kea Observatories will give the monthly lectures at the Observatory’s headquarters in Waimea and at the Imiloa Astronomy Center in Hilo throughout 2009. The talks offer a chance for the speakers to engage the public in a discussion about the research taking place at their respective facilities, inspiring audience members to embrace the Year’s central theme—The Universe, Yours to Discover.
On Jan. 15, Chad Kalepa Baybayan, the ‘Imiloa Astronomy Center’s Navigator in Residence, gave the inaugural 2009 Maunakea Lecture at Keck Observatory’s Hualalai Learning Theater. Baybayan’s talk, “Traditional Hawaiian Navigation and Sky Lore,” discussed how early Hawaiians used their powers of observation to understand the movement of the stars, as well as the conditions of the ocean and environment, to navigate the Pacific Ocean.
Worldwide more than 130 countries have planned events to let citizens appreciate astronomy and its contributions to society and culture. The United Nations Educational, Scientific and Cultural Organization and the International Astronomical Union, which initiated IYA, hosted the Year’s kick-off party on Jan. 15 and Jan. 16 in Paris. Close to 800 government representatives, diplomats, scientists, astronomy undergraduates, astronauts, industrialists and artists mingled and listened to Nobel laureates’ thoughts on astronomy and on the humbling power that observing the heavens can have for all of humanity.
For information on upcoming Hawai’i Island, national and international IYA 2009 events, click the following links to see the Mauna Kea Observatories Outreach Committee’s list, the US IYA2009 Web site and the international IYA2009 Web site. |
Hair is a protein filament that grows from follicles found in the dermis. Hair is one of the defining characteristics of mammals. The human body, apart from areas of glabrous skin, is covered in follicles which produce thick terminal and fine vellus hair. Most common interest in hair is focused on hair growth, hair types, and hair care, but hair is also an important biomaterial primarily composed of protein, notably alpha-keratin.
Attitudes towards different forms of hair, such as hairstyles and hair removal, vary widely across different cultures and historical periods, but it is often used to indicate a person's personal beliefs or social position, such as their age, sex, or religion.
The word "hair" usually refers to two distinct structures:
- the part beneath the skin, called the hair follicle, or, when pulled from the skin, the bulb or root. This organ is located in the dermis and maintains stem cells, which not only re-grow the hair after it falls out, but also are recruited to regrow skin after a wound.
- the shaft, which is the hard filamentous part that extends above the skin surface. A cross section of the hair shaft may be divided roughly into three zones.
Hair fibers have a structure consisting of several layers, starting from the outside:
- the cuticle, which consists of several layers of flat, thin cells laid out overlapping one another as roof shingles
- the cortex, which contains the keratin bundles in cell structures that remain roughly rod-like
- the medulla, a disorganized and open area at the fiber's center
Each strand of hair is made up of the medulla, cortex, and cuticle. The innermost region, the medulla, is not always present and is an open, unstructured region. The highly structured and organized cortex, the second of the three layers, is the primary source of mechanical strength and water uptake. The cortex contains melanin, which colors the fiber based on the number, distribution and types of melanin granules. The shape of the follicle determines the shape of the cortex, and the shape of the fiber is related to how straight or curly the hair is. People with straight hair have round hair fibers; oval and other-shaped fibers are generally more wavy or curly. The cuticle is the outer covering. Its complex structure slides as the hair swells and is covered with a single molecular layer of lipid that makes the hair repel water. The diameter of human hair varies from 0.017 to 0.18 millimeters (0.00067 to 0.00709 in). There are also about two million small, tubular sweat glands that produce watery fluids that cool the body by evaporation, and the glands at the opening of the hair produce a fatty secretion that lubricates the hair.
Hair growth begins inside the hair follicle. The only "living" portion of the hair is found in the follicle. The hair that is visible is the hair shaft, which exhibits no biochemical activity and is considered "dead". The base of a hair's root (the "bulb") contains the cells that produce the hair shaft. Other structures of the hair follicle include the oil-producing sebaceous gland, which lubricates the hair, and the arrector pili muscles, which are responsible for causing hairs to stand up. In humans with little body hair, the effect results in goose bumps.
Root of the hair
The root of the hair ends in an enlargement, the hair bulb, which is whiter in color and softer in texture than the shaft, and is lodged in a follicular involution of the epidermis called the hair follicle. The bulb of hair consists of fibrous connective tissue, glassy membrane, external root sheath, internal root sheath composed of epithelium stratum (Henle's layer) and granular stratum (Huxley's layer), cuticle, cortex and medulla.
All natural hair colors are the result of two types of hair pigments. Both of these pigments are melanin types, produced inside the hair follicle and packed into granules found in the fibers. Eumelanin is the dominant pigment in brown hair and black hair, while pheomelanin is dominant in red hair. Blond hair is the result of having little pigmentation in the hair strand. Gray hair occurs when melanin production decreases or stops, while poliosis is hair (and often the skin to which the hair is attached), typically in spots, that either never possessed melanin at all or ceased producing it for natural genetic reasons, generally in the first years of life.
Human hair growth
Hair grows everywhere on the external body except for mucous membranes and glabrous skin, such as that found on the palms of the hands, the soles of the feet, and the lips.
Hair follows a specific growth cycle with three distinct and concurrent phases: anagen, catagen, and telogen; all three occur simultaneously throughout the body. Each phase has specific characteristics that determine the length of the hair.
The body has different types of hair, including vellus hair and androgenic hair, each with its own type of cellular construction. The different construction gives the hair unique characteristics, serving specific purposes, mainly warmth and protection.
Hair exists in a variety of textures. Three main aspects of hair texture are the curl pattern, volume, and consistency. The derivations of hair texture are not fully understood. All mammalian hair is composed of keratin, so the make-up of hair follicles is not the source of varying hair patterns. There are a range of theories pertaining to the curl patterns of hair. Scientists have come to believe that the shape of the hair shaft has an effect on the curliness of the individual's hair. A very round shaft allows for fewer disulfide bonds to be present in the hair strand. This means the bonds present are directly in line with one another, resulting in straight hair.
The flatter the hair shaft becomes, the curlier the hair gets, because the shape allows more cysteines to become compacted together, resulting in a bent shape that, with every additional disulfide bond, becomes curlier in form. Just as the hair follicle shape determines curl pattern, the hair follicle size determines thickness: as the circumference of the hair follicle expands, so does the thickness of the hair. An individual's hair volume, as a result, can be thin, normal, or thick. The consistency of hair can almost always be grouped into three categories: fine, medium, and coarse. This trait is determined by the hair follicle volume and the condition of the strand. Fine hair has the smallest circumference, coarse hair has the largest, and medium hair falls between the two. Coarse hair has a more open cuticle than thin or medium hair, making it the most porous.
There are various systems that people use to classify their curl patterns. Knowing one's hair type is a good starting point for knowing how to take care of one's hair. There is not just one method of discovering one's hair type, and it is possible, and quite normal, to have more than one kind of hair type, for instance a mixture of both type 3a and 3b curls.
- Andre Walker system
The Andre Walker Hair Typing System is the most widely used system to classify hair. The system was created by Andre Walker, the hairstylist of Oprah Winfrey. According to this system, there are four types of hair: straight, wavy, curly, and kinky.
- Type 1 is straight hair, which reflects the most sheen and also the most resilient hair of all of the hair types. It is hard to damage and immensely difficult to curl this hair texture. Because the sebum easily spreads from the scalp to the ends without curls or kinks to interrupt its path, it is the most oily hair texture of all.
- Type 2 is wavy hair, whose texture and sheen range somewhere between straight and curly hair. Wavy hair is also more likely to become frizzy than straight hair. While type A waves can easily alternate between straight and curly styles, type B and C wavy hair is resistant to styling.
- Type 3 is curly hair known to have an S-shape. The curl pattern may resemble a lowercase "s", uppercase "S", or sometimes an uppercase "Z" or lowercase "z". This hair type is usually voluminous, "climate dependent (humidity = frizz), and damage-prone." Lack of proper care causes less defined curls.
- Type 4 is kinky hair, which features a tightly coiled curl pattern (or no discernible curl pattern at all) that is often fragile with a very high density. This type of hair shrinks when wet and because it has fewer cuticle layers than other hair types it is more susceptible to damage.
| Type | Name | Description |
|---|---|---|
| 1a | Straight (Fine/Thin) | Hair tends to be very soft, thin, shiny, oily, poor at holding curls, difficult to damage. |
| 1b | Straight (Medium) | Hair characterized by volume and body. |
| 1c | Straight (Coarse) | Hair tends to be bone-straight, coarse, difficult to curl. |
| 2a | Wavy (Fine/Thin) | Hair has a definite "S" pattern, can easily be straightened or curled, usually receptive to a variety of styles. |
| 2b | Wavy (Medium) | Can tend to be frizzy and a little resistant to styling. |
| 2c | Wavy (Coarse) | Fairly coarse, frizzy or very frizzy with thicker waves, often more resistant to styling. |
| 3a | Curly (Loose) | Presents a definite "S" pattern, tends to combine thickness, volume, and/or frizziness. |
| 3b | Curly (Tight) | Presents a definite "S" pattern, with curls ranging from spirals to spiral-shaped corkscrews. |
| 4a | Kinky (Soft) | Hair tends to be very wiry and fragile, tightly coiled, and can feature curly patterning. |
| 4b | Kinky (Wiry) | As 4a, but with a less defined pattern of curls; looks more like a "Z" with sharp angles. |
- FIA system
This is a method which classifies the hair by curl pattern, hair-strand thickness and overall hair volume.
Curl pattern:

| Code | Description |
|---|---|
| 1b | Straight, but with a slight body wave adding some volume. |
| 1c | Straight, with body wave and one or two visible S-waves (e.g. at the nape of the neck or temples). |
| 2a | Loose, with stretched S-waves throughout. |
| 2b | Shorter, with more distinct S-waves (resembling e.g. braided damp hair). |
| 2c | Distinct S-waves, some spiral curling. |
| 3a | Big, loose spiral curls. |
| 4a | Very ("Really") curly: tightly coiled S-curls. |
| 4b | Z-patterned (tightly coiled, sharply angled). |
| 4c | Mostly Z-patterned (tightly kinked, less definition). |

Strand thickness:

| Thickness | Description |
|---|---|
| Fine | Thin strands that are sometimes almost translucent when held up to the light. |
| Medium | Strands that are neither fine nor coarse. |
| Coarse | Thick strands whose shed strands are usually easily identified. |

Overall volume, by circumference of a full-hair ponytail:

| Code | Volume | Ponytail circumference |
|---|---|---|
| i | Thin | Less than 2 inches (5 centimetres) |
| ii | Normal | From 2 to 4 inches (5 to 10 centimetres) |
| iii | Thick | More than 4 inches (10 centimetres) |
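The volume classification is the one part of this system with explicit numeric thresholds, so it can be expressed as a small rule. The sketch below is only illustrative: the function name and labels are made up here, and the cut-offs are simply the ones in the table above.

```python
# Minimal sketch of the volume part of the classification above.
# Thresholds come from the table; function name and labels are illustrative only.

def classify_ponytail_volume(circumference_inches: float) -> str:
    """Map the circumference of a full-hair ponytail to a volume class."""
    if circumference_inches < 2:
        return "i (thin)"
    elif circumference_inches <= 4:
        return "ii (normal)"
    else:
        return "iii (thick)"

print(classify_ponytail_volume(1.5))  # i (thin)
print(classify_ponytail_volume(3.0))  # ii (normal)
print(classify_ponytail_volume(4.5))  # iii (thick)
```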
Many mammals have fur and other hairs that serve different functions. Hair provides thermal regulation and camouflage for many animals; for others it provides signals to other animals such as warnings, mating, or other communicative displays; and for some animals hair provides defensive functions and, rarely, even offensive protection. Hair also has a sensory function, extending the sense of touch beyond the surface of the skin. Guard hairs give warnings that may trigger a recoiling reaction.
While humans have developed clothing and other means of keeping warm, the hair found on the head serves primarily as a source of heat insulation and cooling (when sweat evaporates from soaked hair) as well as protection from ultra-violet radiation exposure. The function of hair in other locations is debated. Hats and coats are still required while doing outdoor activities in cold weather to prevent frostbite and hypothermia, but the hair on the human body does help to keep the internal temperature regulated. When the body is too cold, the arrector pili muscles found attached to hair follicles stand up, causing the hair in these follicles to do the same. These hairs then form a heat-trapping layer above the epidermis. This process is formally called piloerection, derived from the Latin words 'pilus' ('hair') and 'erectio' ('rising up'), but is more commonly known as 'having goose bumps' in English. This is more effective in other mammals whose fur fluffs up to create air pockets between hairs that insulate the body from the cold. The opposite actions occur when the body is too warm; the arrector muscles make the hair lie flat on the skin which allows heat to leave.
In some mammals, such as hedgehogs and porcupines, the hairs have been modified into hard spines or quills. These are covered with thick plates of keratin and serve as protection against predators. Thick hair, such as a lion's mane or a grizzly bear's fur, does offer some protection from physical damage such as bites and scratches.
Displacement and vibration of hair shafts are detected by hair follicle nerve receptors and nerve receptors within the skin. Hairs can sense movements of air as well as touch by physical objects and they provide sensory awareness of the presence of ectoparasites. Some hairs, such as eyelashes, are especially sensitive to the presence of potentially harmful matter.
Eyebrows and eyelashes
The eyebrows provide moderate protection to the eyes from dirt, sweat and rain. They also play a key role in non-verbal communication by displaying emotions such as sadness, anger, surprise and excitement. In many other mammals, they contain much longer, whisker-like hairs that act as tactile sensors.
Eyelashes grow at the edges of the eyelids and protect the eye from dirt. The eyelash is to humans, camels, horses, ostriches, etc., what whiskers are to cats: it is used to sense when dirt, dust, or any other potentially harmful object is too close to the eye, and the eye reflexively closes as a result of this sensation.
Hair has its origins in the common ancestor of mammals, the synapsids, about 300 million years ago. It is currently unknown at what stage the synapsids acquired mammalian characteristics such as body hair and mammary glands, as fossils only rarely provide direct evidence for soft tissues. A skin impression of the belly and lower tail of a pelycosaur, possibly Haptodus, shows that the basal synapsid stock bore transverse rows of rectangular scutes, similar to those of a modern crocodile. An exceptionally well-preserved skull of Estemmenosuchus, a therapsid from the Upper Permian, shows smooth, hairless skin with what appear to be glandular depressions, though as a semi-aquatic species it may not be particularly useful for determining the integument of terrestrial species. The oldest undisputed fossils showing unambiguous imprints of hair are the Callovian (late middle Jurassic) Castorocauda and several contemporary haramiyidans, both near-mammal cynodonts. More recently, studies of terminal Permian Russian coprolites may suggest that non-mammalian synapsids from that era had fur. If this is the case, these are the oldest hair remnants known, showing that fur occurred as far back as the latest Paleozoic.
Some modern mammals have a special gland in front of each orbit used to preen the fur, called the harderian gland. Imprints of this structure are found in the skull of the small early mammals like Morganucodon, but not in their cynodont ancestors like Thrinaxodon.
The hairs of the fur in modern animals are all connected to nerves, and so the fur also serves as a transmitter for sensory input. Fur could have evolved from sensory hair (whiskers). The signals from this sensory apparatus are interpreted in the neocortex, a region of the brain that expanded markedly in animals like Morganucodon and Hadrocodium. The more advanced therapsids could have had a combination of naked skin, whiskers, and scutes; a full pelage likely did not evolve until the therapsid-mammal transition. The more advanced, smaller therapsids could have had a combination of hair and scutes, a combination still found in some modern mammals, such as rodents and the opossum.
In varying degrees most mammals have some skin areas without natural hair. On the human body, glabrous skin is found on the ventral portion of the fingers, palms, soles of feet and lips, which are all parts of the body most closely associated with interacting with the world around us, as are the labia minora and glans penis. There are four main types of mechanoreceptors in the glabrous skin of humans: Pacinian corpuscles, Meissner's corpuscles, Merkel's discs, and Ruffini corpuscles.
The naked mole-rat (Heterocephalus glaber) has evolved skin that lacks the general pelage covering, yet has retained long, very sparsely scattered tactile hairs over its body. Glabrousness is a trait that may be associated with neoteny.
The general hairlessness of humans in comparison to related species may be due to loss of functionality in the pseudogene KRTHAP1 (which helps produce keratin) in the human lineage about 240,000 years ago. On an individual basis, mutations in the gene HR can lead to complete hair loss, though this is not typical in humans. Humans may also lose their hair as a result of hormonal imbalance due to drugs or pregnancy.
In order to comprehend why humans are essentially hairless, it is essential to understand that mammalian body hair is not merely an aesthetic characteristic; it protects the skin from wounds, bites, heat, cold, and UV radiation. Additionally, it can be used as a communication tool and as a camouflage. To this end, it can be concluded that benefits stemming from the loss of human body hair must be great enough to outweigh the loss of these protective functions by nakedness.
Humans are the only primate species that have undergone significant hair loss and of the approximately 5000 extant species of mammal, only a handful are effectively hairless. This list includes elephants, rhinoceroses, hippopotamuses, walruses, some species of pigs, whales and other cetaceans, and naked mole rats. Most mammals have light skin that is covered by fur, and biologists believe that early human ancestors started out this way also. Dark skin probably evolved after humans lost their body fur, because the naked skin was vulnerable to the strong UV radiation as explained in the Out of Africa hypothesis. Therefore, evidence of the time when human skin darkened has been used to date the loss of human body hair, assuming that the dark skin was needed after the fur was gone.
It was expected that dating the split of the ancestral human louse into two species, the head louse and the pubic louse, would date the loss of body hair in human ancestors. However, it turned out that the human pubic louse does not descend from the ancestral human louse, but from the gorilla louse, diverging 3.3 million years ago. This suggests that humans had lost body hair (but retained head hair) and developed thick pubic hair prior to this date, were living in or close to the forest where gorillas lived, and acquired pubic lice from butchering gorillas or sleeping in their nests. The evolution of the body louse from the head louse, on the other hand, places the date of clothing much later, some 100,000 years ago.
The sweat glands in humans could have evolved to spread from the hands and feet as the body hair changed, or the hair change could have occurred to facilitate sweating. Horses and humans are two of the few animals capable of sweating on most of their body, yet horses are larger and still have fully developed fur. In humans, the skin hairs lie flat in hot conditions, as the arrector pili muscles relax, preventing heat from being trapped by a layer of still air between the hairs, and increasing heat loss by convection.
Another hypothesis for the pattern of body hair on humans proposes that Fisherian runaway sexual selection played a role (as it did in the selection of long head hair; see terminal and vellus hair), along with a much larger role of testosterone in men. Sexual selection is the only theory thus far that explains the sexual dimorphism seen in the hair patterns of men and women. On average, men have more body hair than women: males have more terminal hair, especially on the face, chest, abdomen, and back, while females have more vellus hair, which is less visible. The halting of hair development at a juvenile stage, vellus hair, would also be consistent with the neoteny evident in humans, especially in females, and thus the two could have arisen at the same time. This theory, however, leans heavily on today's cultural norms: there is no evidence that sexual selection would proceed to such a drastic extent over a million years ago, when a full, lush coat of hair would most likely indicate health and would therefore be more likely to be selected for, not against, and not all human populations today show sexual dimorphism in body hair.
A further hypothesis is that human hair was reduced in response to ectoparasites. The "ectoparasite" explanation of modern human nakedness is based on the principle that a hairless primate would harbor fewer parasites. When our ancestors adopted group-dwelling social arrangements roughly 1.8 mya, ectoparasite loads increased dramatically. Early humans became the only one of the 193 primate species to have fleas, which can be attributed to the close living arrangements of large groups of individuals. While primate species have communal sleeping arrangements, these groups are always on the move and thus are less likely to harbor ectoparasites. Because of this, selection pressure for early humans would favor decreasing body hair because those with thick coats would have more lethal-disease-carrying ectoparasites and would thereby have lower fitness.
Another view is proposed by James Giles, who attempts to explain hairlessness as evolved from the relationship between mother and child, and as a consequence of bipedalism. Giles also connects romantic love to hairlessness.
Another hypothesis is that humans' use of fire caused or initiated the reduction in human hair.
Evolutionary biologists suggest that the genus Homo arose in East Africa approximately 2.5 million years ago. These early humans devised new hunting techniques, and the higher-protein diet led to the evolution of larger body and brain sizes. Jablonski postulates that increasing body size, in conjunction with intensified hunting during the day at the equator, gave rise to a greater need to rapidly expel heat. As a result, humans evolved the ability to sweat, a process facilitated by the loss of body hair.
Another factor in human evolution that also occurred in the prehistoric past was a preferential selection for neoteny, particularly in females. The idea that adult humans exhibit certain neotenous (juvenile) features, not evinced in the great apes, is about a century old. Louis Bolk made a long list of such traits, and Stephen Jay Gould published a short list in Ontogeny and Phylogeny. In addition, paedomorphic characteristics in women are often acknowledged as desirable by men in developed countries. For instance, vellus hair is a juvenile characteristic. However, while men develop longer, coarser, thicker, and darker terminal hair through sexual differentiation, women do not, leaving their vellus hair visible.
Jablonski asserts head hair was evolutionarily advantageous for pre-humans to retain because it protected the scalp as they walked upright in the intense African (equatorial) UV light. While some might argue that, by this logic, humans should also express hairy shoulders because these body parts would putatively be exposed to similar conditions, the protection of the head, the seat of the brain that enabled humanity to become one of the most successful species on the planet (and which also is very vulnerable at birth) was arguably a more urgent issue (axillary hair in the underarms and groin were also retained as signs of sexual maturity). Sometime during the gradual process by which Homo erectus began a transition from furry skin to the naked skin expressed by Homo sapiens, hair texture putatively gradually changed from straight hair (the condition of most mammals, including humanity's closest cousins—chimpanzees) to Afro-textured hair or 'kinky' (i.e. tightly coiled). This argument assumes that curly hair better impedes the passage of UV light into the body relative to straight hair (thus curly or coiled hair would be particularly advantageous for light-skinned hominids living at the equator).
This is substantiated by Iyengar's findings (1998) that UV light can enter straight human hair roots (and thus the body through the skin) via the hair shaft. Specifically, the results of that study suggest that this phenomenon resembles the passage of light through fiber-optic tubes (which do not function as effectively when kinked or sharply curved or coiled). In this sense, when hominids (i.e. Homo erectus) were gradually losing their straight body hair and thereby exposing the initially pale skin underneath their fur to the sun, straight hair would have been an adaptive liability. By inverse logic, later, as humans traveled farther from Africa and/or the equator, straight hair may have (initially) evolved to aid the entry of UV light into the body during the transition from dark, UV-protected skin to paler skin.
Some conversely believe that tightly coiled hair that grows into a typical Afro-like formation would have greatly reduced the ability of the head and brain to cool, because although African people's hair is much less dense than its European counterpart, in the intense sun the effective 'woolly hat' that such hair produced would have been a disadvantage. However, anthropologists such as Nina Jablonski argue the opposite about this hair texture. Specifically, Jablonski suggests that the adjective "woolly" in reference to Afro-hair is a misnomer in connoting the high heat insulation derivable from the true wool of sheep. Instead, the relatively sparse density of Afro-hair, combined with its springy coils, actually results in an airy, almost sponge-like structure that, Jablonski argues, more likely facilitates an increase in the circulation of cool air onto the scalp. Further, wet Afro-hair does not stick to the neck and scalp unless totally drenched, and instead tends to retain its basic springy puffiness because it responds less readily to moisture and sweat than straight hair does. In this sense, the trait may enhance comfort levels in intense equatorial climates more than straight hair (which, on the other hand, tends to fall naturally over the ears and neck to a degree that provides slightly enhanced comfort in cold climates relative to tightly coiled hair).
Furthermore, some interpret the ideas of Charles Darwin as suggesting that some traits, such as hair texture, were so arbitrary to human survival that the role natural selection played was trivial. Hence, they argue in favor of his suggestion that sexual selection may be responsible for such traits. However, inclinations toward deeming hair texture "adaptively trivial" may be rooted in certain cultural value judgments more than in objective logic. In this sense, the possibility that hair texture played an adaptively significant role cannot be completely eliminated from consideration. In fact, while the sexual selection hypothesis cannot be ruled out, the asymmetrical distribution of this trait points to environmental influence. Specifically, if hair texture were simply the result of adaptively arbitrary human aesthetic preferences, one would expect the global distribution of the various hair textures to be fairly random. Instead, the distribution of Afro-hair is strongly skewed toward the equator.
Further, it is notable that the most pervasive expression of this hair texture can be found in sub-Saharan Africa; a region of the world that abundant genetic and paleo-anthropological evidence suggests, was the relatively recent (≈200,000-year-old) point of origin for modern humanity. In fact, although genetic findings (Tishkoff, 2009) suggest that sub-Saharan Africans are the most genetically diverse continental group on Earth, Afro-textured hair approaches ubiquity in this region. This points to a strong, long-term selective pressure that, in stark contrast to most other regions of the genomes of sub-Saharan groups, left little room for genetic variation at the determining loci. Such a pattern, again, does not seem to support human sexual aesthetics as being the sole or primary cause of this distribution.
The EDAR locus
A group of studies has recently shown that genetic patterns at the EDAR locus, a region of the modern human genome that contributes to hair texture variation among most individuals of East Asian descent, support the hypothesis that (East Asian) straight hair likely developed in this branch of the modern human lineage subsequent to the original expression of tightly coiled natural afro-hair. Specifically, the relevant findings indicate that the EDAR mutation coding for the predominant East Asian 'coarse' or thick, straight hair texture arose within the past ≈65,000 years, a time frame spanning from the earliest of the 'Out of Africa' migrations to the present.
Hair care involves the hygiene and cosmetology of hair including hair on the scalp, facial hair (beard and moustache), pubic hair and other body hair. Hair care routines differ according to an individual's culture and the physical characteristics of one's hair. Hair may be colored, trimmed, shaved, plucked, or otherwise removed with treatments such as waxing, sugaring, and threading.
Depilation is the removal of hair from the surface of the skin. This can be achieved through methods such as shaving. Epilation is the removal of the entire hair strand, including the part of the hair that has not yet left the follicle. A popular way to epilate hair is through waxing.
Shaving is accomplished with bladed instruments, such as razors. The blade is brought close to the skin and stroked over the hair in the desired area to cut the terminal hairs and leave the skin feeling smooth. Depending upon the rate of growth, one can begin to feel the hair growing back within hours of shaving. This is especially evident in men who develop a five o'clock shadow after having shaved their faces. This new growth is called stubble. Stubble typically appears to grow back thicker because the shaved hairs are blunted instead of tapered off at the end, although the hair never actually grows back thicker.
Waxing involves using a sticky wax and strip of paper or cloth to pull hair from the root. Waxing is the ideal hair removal technique to keep an area hair-free for long periods of time. It can take three to five weeks for waxed hair to begin to resurface again. Hair in areas that have been waxed consistently is known to grow back finer and thinner, especially compared to hair that has been shaved with a razor.
Laser hair removal is a cosmetic method in which a small laser beam pulses selective heat onto the dark target matter in the area that causes hair growth, without harming the skin tissue. The process is repeated several times over the course of many months to a couple of years, with hair regrowing less frequently until it finally stops; it is used as a more permanent alternative to waxing or shaving. Laser removal is offered in many clinics, and many at-home products are also available.
Cutting and trimming
Because the hair on one's head is normally longer than other types of body hair, it is cut with scissors or clippers. People with longer hair will most often use scissors to cut their hair, whereas shorter hair is maintained using a trimmer. Depending on the desired length and overall health of the hair, periods without cutting or trimming the hair can vary.
Hair has great social significance for human beings. It can grow on most external areas of the human body, except on the palms of the hands and the soles of the feet (among other areas). Hair is most noticeable on most people in a small number of areas, which are also the ones that are most commonly trimmed, plucked, or shaved. These include the face, ears, head, eyebrows, legs, and armpits, as well as the pubic region. The highly visible differences between male and female body and facial hair are a notable secondary sex characteristic.
Indication of status
Healthy hair indicates health and youth (important in evolutionary biology). Hair color and texture can be a sign of ethnic ancestry. Facial hair is a sign of puberty in men. White hair is a sign of age or genetics, which may be concealed with hair dye (not easily for some), although many prefer to assume it (especially if it is a poliosis characteristic of the person since childhood). Male pattern baldness is a sign of age, which may be concealed with a toupee, hats, or religious and cultural adornments. Although drugs and medical procedures exist for the treatment of baldness, many balding men simply shave their heads. In early modern China, the queue was a male hairstyle worn by the Manchus from central Manchuria and the Han Chinese during the Qing dynasty; hair on the front of the head was shaved off above the temples every ten days, mimicking male-pattern baldness, and the rest of the hair braided into a long pigtail.
Hairstyle may be an indicator of group membership. During the English Civil War, the followers of Oliver Cromwell decided to crop their hair close to their head, as an act of defiance to the curls and ringlets of the king's men. This led to the Parliamentary faction being nicknamed Roundheads. Recent isotopic analysis of hair is helping to shed further light on sociocultural interaction, giving information on food procurement and consumption in the 19th century. Having bobbed hair was popular among the flappers in the 1920s as a sign of rebellion against traditional roles for women. Female art students known as the "cropheads" also adopted the style, notably at the Slade School in London, England. Regional variations in hirsutism cause practices regarding hair on the arms and legs to differ. Some religious groups may follow certain rules regarding hair as part of religious observance. The rules often differ for men and women.
Many subcultures have hairstyles which may indicate an unofficial membership. Many hippies, metalheads, and Indian sadhus have long hair, as do many older indie kids. Many punks wear a hairstyle known as a mohawk or other spiked and dyed hairstyles; skinheads have short-cropped or completely shaved heads. Long stylized bangs were very common for emos, scene kids, and younger indie kids in the 2000s and early 2010s, among people of both genders.
Heads were shaved in concentration camps, and head-shaving has been used as punishment, especially for women with long hair. The shaven head is common in military haircuts, while Western monks are known for the tonsure. By contrast, among some Indian holy men, the hair is worn extremely long.
In the time of Confucius (5th century BCE), the Chinese grew out their hair and often tied it, as a symbol of filial piety.
Regular hairdressing in some cultures is considered a sign of wealth or status. The dreadlocks of the Rastafari movement were despised early in the movement's history. In some cultures, having one's hair cut can symbolize a liberation from one's past, usually after a trying time in one's life. Cutting the hair also may be a sign of mourning.
Tightly coiled hair in its natural state may be worn in an Afro. This hairstyle was once worn among African Americans as a symbol of racial pride. Given that the coiled texture is the natural state of some African Americans' hair, or is perceived as being more "African", this simple style is now often seen as a sign of self-acceptance and an affirmation that the beauty norms of the (Eurocentric) dominant culture are not absolute. It is important to note that African Americans as a whole have a variety of hair textures, as they are not an ethnically homogeneous group but one with a variety of racial admixtures.
The film Easy Rider (1969) includes the assumption that the two main characters could have their long hair forcibly shaved with a rusty razor when jailed, symbolizing the intolerance of some conservative groups toward members of the counterculture. At the conclusion of the Oz obscenity trials in the UK in 1971, the defendants had their heads shaved by the police, causing public outcry; during the appeal trial, they appeared in the dock wearing wigs. A case in which a 14-year-old student was expelled from school in Brazil in the mid-2000s, allegedly because of his fauxhawk haircut, sparked national debate and legal action that resulted in compensation.
Women's hair may be hidden using headscarves, a common part of the hijab in Islam and a symbol of modesty required for certain religious rituals in Eastern Orthodoxy. The Russian Orthodox Church requires all married women to wear headscarves inside the church; this tradition is often extended to all women, regardless of marital status. Orthodox Judaism also commands the use of scarves and other head coverings for married women for modesty reasons. Certain Hindu sects also wear head scarves for religious reasons. Sikhs have an obligation not to cut their hair (a Sikh who cuts their hair becomes 'apostate', meaning fallen from the religion), and men keep it tied in a bun on the head, which is then covered appropriately using a turban. Multiple religions, both ancient and contemporary, require or advise followers to allow their hair to become dreadlocks, though people also wear them for fashion. For men, Islam, Orthodox Judaism, Orthodox Christianity, Roman Catholicism, and other religious groups have at various times recommended or required the covering of the head and sections of the hair, and some have dictates relating to the cutting of men's facial and head hair. Some Christian sects throughout history and up to modern times have also religiously proscribed the cutting of women's hair. For some Sunni madhabs, the donning of a kufi or topi is a form of sunnah.
- Chaetophobia – the fear of hair
- Hair analysis (alternative medicine)
- Hypertrichosis – the state of having an excess of hair on the head or body
- Hypotrichosis – the state of having a less than normal amount of hair on the head or body
- Seta – hair-like structures in insects
- Trichotillomania – hair pulling
- Sherrow, Victoria (2006). Encyclopedia of Hair: A Cultural History. Westport, CT: Greenwood Press. p. iv. ISBN 978-0-313-33145-9.
- Krause, K; Foitzik, K (2006). "Biology of the Hair Follicle: The Basics". Seminars in Cutaneous Medicine and Surgery. 25 (1): 2–10. doi:10.1016/j.sder.2006.01.002. PMID 16616298.
- Feughelman, Max (1997). Mechanical Properties and Structure of Alpha-keratin Fibres: Wool, Human Hair and Related Fibres. UNSW Press. ISBN 978-0-86840-359-5. Retrieved 27 January 2016.
- Hair Structure and Hair Life Cycle. follicle.com
- "Topic 2". Texascollaborative.org. Archived from the original on 15 April 2013. Retrieved 18 February 2015.
- Ley, Brian (1999). "Diameter of a Human Hair". Retrieved 28 June 2010.
- Councilman, W. T. (1913). "Ch. 1". Disease and Its Causes. United States: New York Henry Holt and Company London Williams and Norgate The University Press, Cambridge, USA.
- Freinkel, R.K.; Woodley, D.T., eds. (15 March 2001). The Biology of the Skin. CRC Press. p. 80. ISBN 9781850700067.
- Histology Guide | Skin Histology.leeds.ac.uk. Retrieved on 18 May 2016.
- "Curly Hair Gene". Bio.davidson.edu. Retrieved 28 January 2015.
- "Hair type, texture and density | Hairdressing Training". Hairdressing.ac.uk. Archived from the original on 12 February 2015. Retrieved 28 January 2015.
- Bubenik, George A. (1 September 2003). "Why do humans get "goosebumps" when they are cold, or under other circumstances?". Scientific American. Retrieved 4 May 2010.
- Dean, I.; Siva-Jothy, M. T. (2011). "Human fine body hair enhances ectoparasite detection". Biology Letters. 8 (3): 358–61. doi:10.1098/rsbl.2011.0987. PMC 3367735. PMID 22171023.
- "Neuroscience for Kids – Receptors". Faculty.washington.edu. Retrieved 18 February 2015.
- "hair biology – functions of the hair fiber and hair follicle". Keratin.com. Retrieved 18 February 2015.
- Sabah, NH (1974). "Controlled stimulation of hair follicle receptors". Journal of Applied Physiology. 36 (2): 256–7. doi:10.1152/jappl.1918.104.22.1686. PMID 4811387.
- Montagna, W. (1985). "The evolution of human skin(?)". Journal of Human Evolution. 14: 3–22. doi:10.1016/S0047-2484(85)80090-7.
- "Images of Nature". Ion.asu.edu. Archived from the original on 8 May 2006. Retrieved 18 February 2015.
- Niedźwiedzki, Grzegorz; Bojanowski, Maciej (July 2012). "A Supposed Eupelycosaur Body Impression from the Early Permian of the Intra-Sudetic Basin, Poland". Ichnos. 19 (3): 150–155. doi:10.1080/10420940.2012.702549. S2CID 129567176.
- Kardong, K.V. (2002): Vertebrates: Comparative anatomy, function, evolution. 3rd Edition. McGraw-Hill, New York
- Q. Ji; Z-X Luo; C-X Yuan; Tabrum, A. R. (February 2006). "A Swimming Mammaliaform from the Middle Jurassic and Ecomorphological Diversification of Early Mammals". Science. 311 (5764): 1123–7. Bibcode:2006Sci...311.1123J. doi:10.1126/science.1123026. PMID 16497926. S2CID 46067702. See also the news item at "Jurassic "Beaver" Found; Rewrites History of Mammals". Archived from the original on 22 September 2012. Retrieved 12 August 2012.
- "Jurassic squirrel's secret is out". The Hindu. 9 August 2013. Retrieved 29 June 2016.
- Meng, Qing-Jin; Grossnickle, David M.; Di, Liu; Zhang, Yu-Guang; Neander, April I.; Ji, Qiang; Luo, Zhe-Xi (2017). "New gliding mammaliaforms from the Jurassic". Nature. 548 (7667): 291–296. Bibcode:2017Natur.548..291M. doi:10.1038/nature23476. PMID 28792929. S2CID 205259206.
- Bajdek, Piotr (2015). "Microbiota and food residues including possible evidence of pre-mammalian hair in Upper Permian coprolites from Russia". Lethaia. 49 (4): 455–477. doi:10.1111/let.12156.
- Lingham-Soliar, Theagarten (2014). The vertebrate integument, Vol I. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 211–212. ISBN 978-3-642-53748-6.
- Rowe, T. B.; Macrini, T. E.; Luo, Z.-X. (19 May 2011). "Fossil Evidence on Origin of the Mammalian Brain". Science. 332 (6032): 955–957. Bibcode:2011Sci...332..955R. doi:10.1126/science.1203117. PMID 21596988. S2CID 940501.
- Ruben, J.A.; Jones, T.D. (2000). "Selective Factors Associated with the Origin of Fur and Feathers". Am. Zool. 40 (4): 585–596. doi:10.1093/icb/40.4.585.
- Plower, R.P. (1897). An introduction to the study of mammals living and extinct. New York: Cornell University Library. p. 11. Retrieved 8 June 2012.
Flat scutes, with the edges in apposition, and not overlaid, clothe both surfaces of the tail of the beaver, rats, and others of the same order, and also of some insectivores and marsupials.
- Teerink, BJ (2003). Hair of West European Mammals: Atlas and Identification Key. Cambridge University Press. p. 224. ISBN 9780521545778.
- Toth, Maria (29 December 2017). Hair and fur atlas of Central European mammals. Pars Ltd. p. 307. ISBN 978-963-88339-7-6. Retrieved 8 July 2019.
- Prescott, Tony; Ahissar, Ehud; Izhikevich, Eugene (21 November 2015). Scholarpedia of touch. Paris. ISBN 978-94-6239-133-8. OCLC 932171320.
- Linden, David, J. (March 2015). "Chapter 2". Touch: The Science of Hand, Heart and Mind. Viking. ISBN 978-0241184035.
- Rebora, Alfredo (2010). "Lucy's pelt: when we became hairless and how we managed to survive". International Journal of Dermatology. 49 (1): 17–20. doi:10.1111/j.1365-4632.2009.04266.x. ISSN 1365-4632. PMID 20465604. S2CID 21484729.
- Winter, H.; Langbein, L.; Krawczak, M.; Cooper, D.N.; Jave-Suarez, L.F.; Rogers, M.A.; Praetzel, S.; Heidt, P.J.; Schweizer, J. (2001). "Human type I hair keratin pseudogene phihHaA has functional orthologs in the chimpanzee and gorilla: Evidence for recent inactivation of the human gene after the Pan-Homo divergence". Human Genetics. 108 (1): 37–42. doi:10.1007/s004390000439. PMID 11214905. S2CID 21545865.
- Abbasi, A.A. (2011). "Molecular evolution of HR, a gene that regulates the postnatal cycle of the hair follicle". Scientific Reports. 1: 32. Bibcode:2011NatSR...1E..32A. doi:10.1038/srep00032. PMC 3216519. PMID 22355551.
- "Women and Hair Loss: Possible Causes". WebMD. Retrieved 22 April 2020.
- Rantala, M.J. (1999). "Human nakedness: Adaptation against ectoparasites?". International Journal for Parasitology. 29 (12): 1987–1989. doi:10.1016/S0020-7519(99)00133-2. PMID 10961855.
- Jablonski, N.G.; Chaplin, G. (2010). "Human skin pigmentation as an adaptation to UV radiation". Proceedings of the National Academy of Sciences of the United States of America. 107 (Supplement 2): 8962–8968. Bibcode:2010PNAS..107.8962J. doi:10.1073/pnas.0914628107. PMC 3024016. PMID 20445093.
- Bergman, Jerry (2014). "Why mammal body hair is an evolutionary enigma". Creation Research Society Quarterly Journal. 50 (3): 216. Archived from the original on 16 April 2016. Retrieved 18 May 2016.[unreliable source?]
- "Gorillas gave pubic lice to humans, DNA study reveals". National Geographic. 28 October 2010. Archived from the original on 16 December 2014. Retrieved 18 February 2015.
- Weiss RA (10 February 2009). "Apes, lice and prehistory". J Biol. 8 (2): 20. doi:10.1186/jbiol114. PMC 2687769. PMID 19232074.
- Kittler, R.; Kayser, M.; Stoneking, M. (2004). "Molecular Evolution of Pediculus humanus and the Origin of Clothing" (PDF). Current Biology. 14 (24): 1414–7. doi:10.1016/j.cub.2004.12.024. PMID 12932325. Retrieved 4 September 2015.
- Toups, M.A.; Kitchen, A.; Light, J.E.; Reed, D.L. (2011). "Origin of clothing lice indicates early clothing use by anatomically modern humans in Africa". Molecular Biology and Evolution. 28 (1): 29–32. doi:10.1093/molbev/msq234. PMC 3002236. PMID 20823373.
- Dixson, A.F. (2009). Sexual selection and the origins of human mating systems (1 ed.). Oxford University Press, USA. ISBN 978-0-19-955942-8.
- Pagel, Mark; Bodmer, Walter (2003). "A naked ape would have fewer parasites". Proceedings of the Royal Society B: Biological Sciences. 270 (Suppl 1): S117–S119. doi:10.1098/rsbl.2003.0041. PMC 1698033. PMID 12952654.
- Rantala, M.J. (1999). "Human nakedness: Adaptation against ectoparasites?" (PDF). International Journal for Parasitology. 29 (12): 1987–1989. doi:10.1016/S0020-7519(99)00133-2. PMID 10961855. Archived from the original (PDF) on 5 February 2011. Retrieved 14 December 2010.
- Giles, James (20 March 2015). "Naked love: The evolution of human hairlessness". Biological Theory. 5 (4): 326–336. doi:10.1162/BIOT_a_00062. S2CID 84164968.
- Shea, Christopher (12 July 2011). "Human hairlessness: The naked love explanation". Ideas Market blog. The Wall Street Journal. Retrieved 18 February 2015.
- Couch, Alan (3 February 2016). "Fur or fire: Was the use of fire the initial selection pressure for fur loss in ancestral hominins?". PeerJ Preprints. 4: e1702v1. doi:10.7287/peerj.preprints.1702v1. Retrieved 10 February 2016.
- Jablonski, Nina G. (1 May 2008). Skin: A Natural History. University of California Press. pp. 13–. ISBN 978-0-520-94170-0. Retrieved 27 January 2016.
- Bolk, L. (1926). Das Problem der Menschwerdung (in German). Jena: Fischer.
- short-list of 25 characters reprinted in Gould, Stephen Jay (1977). Ontogeny and phylogeny. Harvard University Press. p. 357. ISBN 0674639413.
- Scott, Isabel M. (7 October 2014). "Human preferences for sexually dimorphic faces may be evolutionarily novel". Proceedings of the National Academy of Sciences of the United States of America. 111 (40): 14388–14393. Bibcode:2014PNAS..11114388S. doi:10.1073/pnas.1409643111. PMC 4210032. PMID 25246593.
- Fujimoto, A; Kimura, R; Ohashi, J; Omi, K; Yuliwulandari, R; Batubara, L; Mustofa, MS; Samakkarn, U; et al. (2008). "A scan for genetic determinants of human hair morphology: EDAR is associated with Asian hair thickness". Human Molecular Genetics. 17 (6): 835–43. doi:10.1093/hmg/ddm355. PMID 18065779.
- Fujimoto, A; Ohashi, J; Nishida, N; Miyagawa, T; Morishita, Y; Tsunoda, T; Kimura, R; Tokunaga, K (2008). "A replication study confirmed the EDAR gene to be a major contributor to population differentiation regarding head hair thickness in Asia" (PDF). Human Genetics. 124 (2): 179–85. doi:10.1007/s00439-008-0537-1. hdl:2241/103672. PMID 18704500. S2CID 20084816. Archived from the original (PDF) on 5 February 2011. Retrieved 14 December 2010.
- Mou, C; Thomason, HA; Willan, PM; Clowes, C; Harris, WE; Drew, CF; Dixon, J; Dixon, MJ; Headon, DJ (2008). "Enhanced ectodysplasin-A receptor (EDAR) signaling alters multiple fiber characteristics to produce the East Asian hair form" (PDF). Human Mutation. 29 (12): 1405–11. doi:10.1002/humu.20795. PMID 18561327. S2CID 37696013. Retrieved 30 January 2019.
- "Dermatologyinfo.net". Dermatologyinfo.net. Retrieved 21 May 2012.
- "Premature graying of hair". Retrieved 15 November 2017.
- Gupta, Ankush (27 April 2014). "Human Hair "Waste" and Its Utilization: Gaps and Possibilities". Journal of Waste Management. 2014: 1–17. doi:10.1155/2014/498018.
- Ashby, Steven P. (2016). "Archaeologies of Hair: the head and its grooming in ancient and contemporary societies". Internet Archaeology (42). doi:10.11141/ia.42.6.
- Hielscher, Sabine (2016). "Because You're Worth It: Women's daily hair care routines in contemporary Britain". Internet Archaeology (42). doi:10.11141/ia.42.6.13.
- Glenday, Craig (2010). Guinness World Records 2011. ISBN 9781904994572.
- Olmert, Michael (1996). Milton's Teeth and Ovid's Umbrella: Curiouser & Curiouser Adventures in History, p. 53. Simon & Schuster, New York. ISBN 0-684-80164-7
- Brown, Chloe; Alexander, Michelle (2016). "Hair as a Window on Diet and Health in Post-Medieval London: an isotopic analysis". Internet Archaeology (42). doi:10.11141/ia.42.6.12.
- Green, Jonathon, (1999). All Dressed Up: The Sixties and the Counterculture. London: Pimlico. ISBN 0-7126-6523-4.
- "G1 – Justiça do CE condena escola por barrar aluno com cabelo 'moicano' – notícias em Ceará" [G1 - CE court condemns school for barring student with 'mohawk' hair - news in Ceara]. G1.globo.com. 28 September 2011. Retrieved 18 February 2015.
- "G1 – Aluno diz que jogador inspirou 'corte moicano' alvo de ação judicial no CE – notícias em Ceará" [G1 says student inspired 'Mohawk court' subject to legal action in CE - news in Ceara]. G1.globo.com. 30 September 2011. Retrieved 18 February 2015.
- Dilgeer, Harjinder Singh (2005) Dictionary of Sikh Philosophy, Sikh University Press.
- The War Within Our Hearts – Page 65 Sa'ad Quadri – 2013
- Iyengar, B. (1998). "The hair follicle is a specialized UV receptor in human skin?". Bio Signals Recep. 7 (3): 188–194. doi:10.1159/000014544. PMID 9672761. S2CID 46864921.
- Jablonski, N.G. (2006). Skin: a natural history. Berkeley, CA: University of California Press.
- Rogers, Alan R.; Iltis, David; Wooding, Stephen (2004). "Genetic variation at the MC1R locus and the time since loss of human body hair". Current Anthropology. 45 (1): 105–108. doi:10.1086/381006.
- Tishkoff, S. A.; Dietzsch, E.; Speed, W.; Pakstis, A. J.; Kidd, J. R.; Cheung, K.; Bonne-Tamir, B.; Santachiara-Benerecetti, A. S.; et al. (1996). "Global patterns of linkage disequilibrium at the CD4 locus and modern human origins". Science. 271 (5254): 1380–1387. Bibcode:1996Sci...271.1380T. doi:10.1126/science.271.5254.1380. PMID 8596909. S2CID 4266475.
|Wikimedia Commons has media related to Hair.|
- Quotations related to Hair at Wikiquote
- The dictionary definition of hair at Wiktionary
- How to measure the diameter of your own hair using a laser pointer
- Instant insight outlining the chemistry of hair from the Royal Society of Chemistry
- PUIU, TIBI (23 August 2018). "How fast hair grows, and other hairy science". ZME Science. Retrieved 30 August 2018. |
Fertility is the natural capability to produce offspring. As a measure, fertility rate is the number of offspring born per mating pair, individual or population. Fertility differs from fecundity, which is defined as the potential for reproduction (influenced by gamete production, fertilization and carrying a pregnancy to term). A lack of fertility is infertility while a lack of fecundity would be called sterility.
In demographic contexts, fertility refers to the actual production of offspring, rather than the physical capability to produce which is termed fecundity. While fertility can be measured, fecundity cannot be. Demographers measure the fertility rate in a variety of ways, which can be broadly broken into "period" measures and "cohort" measures. "Period" measures refer to a cross-section of the population in one year. "Cohort" data on the other hand, follows the same people over a period of decades. Both period and cohort measures are widely used.
- Crude birth rate (CBR) - the number of live births in a given year per 1,000 people alive at the middle of that year. One disadvantage of this indicator is that it is influenced by the age structure of the population.
- General fertility rate (GFR) - the number of births in a year divided by the number of women aged 15–44, times 1000. It focuses on the potential mothers only, and takes the age distribution into account.
- Child-Woman Ratio (CWR) - the ratio of the number of children under 5 to the number of women 15–49, times 1000. It is especially useful in historical data as it does not require counting births. This measure is actually a hybrid, because it involves deaths as well as births. (That is, because of infant mortality some of the births are not included; and because of adult mortality, some of the women who gave birth are not counted either.)
- Coale's Index of Fertility - a special device used in historical research
- Total fertility rate (TFR) - the total number of children a woman would bear during her lifetime if she were to experience the prevailing age-specific fertility rates (ASFRs) of women. TFR equals five times the sum of the ASFRs across all five-year age groups (a short computational sketch follows this list).
- Gross Reproduction Rate (GRR) - the number of girl babies a synthetic cohort will have. It assumes that all of the baby girls will grow up and live to at least age 50.
- Net Reproduction Rate (NRR) - the NRR starts with the GRR and adds the realistic assumption that some of the women will die before age 49; therefore they will not be alive to bear some of the potential babies that were counted in the GRR. NRR is always lower than GRR, but in countries where mortality is very low, almost all the baby girls grow up to be potential mothers, and the NRR is practically the same as GRR. In countries with high mortality, NRR can be as low as 70% of GRR. When NRR = 1.0, each generation of 1000 baby girls grows up and gives birth to exactly 1000 girls. When NRR is less than one, each generation is smaller than the previous one. When NRR is greater than 1 each generation is larger than the one before. NRR is a measure of the long-term future potential for growth, but it usually is different from the current population growth rate.
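To make the relationships between these measures concrete, here is a minimal Python sketch that computes TFR, GRR, and NRR from a set of age-specific fertility rates. All of the rates and survivorship values below are invented purely for illustration; only the formulas follow the definitions above.

```python
# Illustrative sketch (hypothetical numbers): computing common period fertility
# measures from age-specific fertility rates (ASFRs) for five-year age groups.

# ASFRs: births per woman per year, for age groups 15-19, 20-24, ..., 45-49.
asfr = [0.030, 0.090, 0.110, 0.080, 0.040, 0.010, 0.002]

# Total fertility rate: 5-year group width times the sum of the ASFRs.
tfr = 5 * sum(asfr)

# Gross reproduction rate: girl babies only; roughly 100 girls per 205 births.
fraction_female = 100 / 205
grr = tfr * fraction_female

# Net reproduction rate: discount GRR by the probability that a girl survives
# to each reproductive age group (survivorship values here are made up).
survival = [0.98, 0.97, 0.97, 0.96, 0.96, 0.95, 0.94]
nrr = 5 * fraction_female * sum(a * s for a, s in zip(asfr, survival))

print(f"TFR = {tfr:.2f}, GRR = {grr:.2f}, NRR = {nrr:.2f}")
```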
Social and economic determinants of fertility
A parent's number of children strongly correlates with the number of children that each person in the next generation will eventually have. Factors generally associated with increased fertility include religiosity, intention to have children, and maternal support. Factors generally associated with decreased fertility include wealth, education, female labor participation, urban residence, intelligence, increased female age and (to a lesser degree) increased male age.
The "Three-step Analysis" of the fertility process was introduced by Kingsley Davis and Judith Blake in 1956 and makes use of three proximate determinants: The economic analysis of fertility is part of household economics, a field that has grown out of the New Home Economics. Influential economic analyses of fertility include Becker (1960), Mincer (1963), and Easterlin (1969). The latter developed the Easterlin hypothesis to account for the Baby Boom.
Bongaarts' model of components of fertility
Bongaarts proposed a model in which the total fertility rate of a population can be calculated from four proximate determinants and the total fecundity (TF): the index of marriage (Cm), the index of contraception (Cc), the index of induced abortion (Ca), and the index of postpartum infecundability (Ci). These indices range from 0 to 1. The higher an index, the higher the TFR it produces; for example, a population in which there are no induced abortions would have a Ca of 1, while a country where everybody used infallible contraception would have a Cc of 0.
TFR = TF × Cm × Ci × Ca × Cc
These four indices can also be used to calculate the total marital fertility (TMFR) and the total natural fertility (TN).
TFR = TMFR × Cm
TMFR = TN × Cc × Ca
TN = TF × Ci
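A short sketch of how these formulas chain together, with index values that are purely illustrative (TF is often taken to be roughly 15.3 births per woman, but treat every number here as an assumption):

```python
# Minimal sketch of Bongaarts' proximate-determinants model.
TF = 15.3   # total fecundity (illustrative value)
Cm = 0.70   # index of marriage
Cc = 0.60   # index of contraception
Ca = 0.95   # index of induced abortion
Ci = 0.80   # index of postpartum infecundability

TN   = TF * Ci              # total natural fertility
TMFR = TN * Cc * Ca         # total marital fertility rate
TFR  = TMFR * Cm            # total fertility rate

print(f"TN = {TN:.2f}, TMFR = {TMFR:.2f}, TFR = {TFR:.2f}")
```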
- The first step is sexual intercourse, and an examination of the average age at first intercourse, the average frequency outside marriage, and the average frequency inside.
- Certain physical conditions may make it impossible for a woman to conceive. This is called "involuntary infecundity." If the woman has a condition that makes conception possible but unlikely, this is termed "subfecundity." Venereal diseases (especially gonorrhea, syphilis, and chlamydia) are common causes. Nutrition is a factor as well: women with less than 20% body fat may be subfecund, a factor of concern for athletes and people susceptible to anorexia. Demographer Rose Frisch has argued that "It takes 50,000 calories to make a baby". There is also subfecundity in the weeks following childbirth, and this can be prolonged for a year or more through breastfeeding. A furious political debate raged in the 1980s over the ethics of baby food companies marketing infant formula in developing countries. A large industry has developed to deal with subfecundity in women and men. An equally large industry has emerged to provide contraceptive devices designed to prevent conception. Their effectiveness in use varies. On average, 85% of married couples using no contraception will have a pregnancy in one year. The rate drops to around 20% when using withdrawal, vaginal sponges, or spermicides. (This assumes the partners never forget to use the contraceptive.) The rate drops to only 2 or 3% when using the pill or an IUD, and drops to near 0% for implants and 0% for tubal ligation (sterilization) of the woman, or a vasectomy for the man.
- After a fetus is conceived, it may or may not survive to birth. "Involuntary fetal mortality" involves natural abortion, miscarriages and stillbirth (a fetus born dead). Human intervention intentionally causing abortion of the fetus is called "therapeutic abortion".
Women have hormonal cycles which determine when they can achieve pregnancy. The cycle is approximately twenty-eight days long, with a fertile period of five days per cycle, but can deviate greatly from this norm. Men are fertile continuously, but their sperm quality is affected by their health, frequency of ejaculation, and environmental factors.
Fertility declines with age in both sexes. In women the decline is more rapid, with complete infertility normally occurring around the age of 50.
Pregnancy rates for sexual intercourse are highest when it occurs every one to two days, or every two to three days. Studies have shown no significant difference between different sex positions and pregnancy rate, as long as it results in ejaculation into the vagina.
A woman's menstrual cycle begins, by arbitrary convention, with menses. Next is the follicular phase, during which estrogen levels build as an ovum matures (under the influence of follicle-stimulating hormone, or FSH) within the ovary. When estrogen levels peak, they spur a surge of luteinizing hormone (LH), which completes the ovum's maturation and enables it to break through the ovary wall. This is ovulation. During the luteal phase, which follows ovulation, LH and FSH cause the post-ovulation ovary to develop into the corpus luteum, which produces progesterone. The production of progesterone inhibits LH and FSH, which (in a cycle without pregnancy) causes the corpus luteum to atrophy and menses to begin the cycle again.
Peak fertility occurs during just a few days of the cycle: usually two days before and two days after the ovulation date. This fertile window varies from woman to woman, just as the ovulation date often varies from cycle to cycle for the same woman. The ovum is usually capable of being fertilized for up to 48 hours after it is released from the ovary. Sperm survive inside the uterus for between 48 and 72 hours on average, with a maximum of 120 hours (5 days).
These periods and intervals are important factors for couples using the rhythm method of contraception.
The average age of menarche in the United States is about 12.5 years. In postmenarchal girls, about 80% of the cycles are anovulatory (ovulation does not actually take place) in the first year after menarche, 50% in the third and 10% in the sixth year.
Menopause occurs during a woman's midlife (between ages 48 and 55). During menopause, hormonal production by the ovaries is reduced, eventually causing a permanent cessation of the ovaries' primary functions, including the hormonal stimulation of the uterine lining (and thus of menstrual periods). This is considered the end of the fertile phase of a woman's life.
- At age 30
- 75% will have a conception ending in a live birth within one year
- 91% will have a conception ending in a live birth within four years.
- At age 35
- 66% will have a conception ending in a live birth within one year
- 84% will have a conception ending in a live birth within four years.
- At age 40
- 44% will have a conception ending in a live birth within one year
- 64% will have a conception ending in a live birth within four years.
Studies of actual couples trying to conceive have come up with higher results: one 2004 study of 770 European women found that 82% of 35- to 39-year-old women conceived within a year, while another in 2013 of 2,820 Danish women saw 78% of 35- to 40-year-olds conceive within a year.
The use of fertility drugs and/or in vitro fertilization can increase the chances of becoming pregnant at a later age. Successful pregnancies facilitated by fertility treatment have been documented in women as old as 67. Studies since 2004 suggest that mammals may continue to produce new eggs throughout their lives, rather than being born with a finite number as previously thought. Researchers at Massachusetts General Hospital in Boston, US, say that if eggs are newly created each month in humans as well, all current theories about the aging of the female reproductive system will have to be overhauled, although at this time this remains conjecture.
According to the March of Dimes, "about 9 percent of recognized pregnancies for women aged 20 to 24 ended in miscarriage. The risk rose to about 20 percent at age 35 to 39, and more than 50 percent by age 42". Birth defects, especially those involving chromosome number and arrangement, also increase with the age of the mother. According to the March of Dimes, "At age 25, your risk of having a baby with Down syndrome is 1 in 1,340. At age 30, your risk is 1 in 940. At age 35, your risk is 1 in 353. At age 40, your risk is 1 in 85. At age 45, your risk is 1 in 35."
Some research suggests that increased male age is associated with a decline in semen volume, sperm motility, and sperm morphology. In studies that controlled for female age, comparisons between men under 30 and men over 50 found relative decreases in pregnancy rates of between 23% and 38%. It is suggested that sperm count declines with age, with men aged 50–80 years producing sperm at an average rate of 75% of that of men aged 20–50 years, and that larger differences are seen in how many of the seminiferous tubules in the testes contain mature sperm:
- In males 20–39 years old, 90% of the seminiferous tubules contain mature sperm.
- In males 40–69 years old, 50% of the seminiferous tubules contain mature sperm.
- In males 80 years old and older, 10% of the seminiferous tubules contain mature sperm.
Decline in male fertility is influenced by many factors, including lifestyle, environment and psychological factors.
Some research also suggests increased risks of health problems for children of older fathers, but no clear association has been proven. A large-scale study in Israel suggested that the children of men aged 40 or older were 5.75 times more likely than children of men under 30 to have an autism spectrum disorder, controlling for year of birth, socioeconomic status, and maternal age. Some studies have suggested that increased paternal age correlates directly with schizophrenia, but this has not been proven.
Australian researchers have found evidence suggesting that being overweight or obese may cause subtle damage to sperm and prevent a healthy pregnancy. They report that fertilization was 40% less likely to succeed when the father was overweight.
The American Fertility Society recommends an age limit for sperm donors of 50 years or less, and many fertility clinics in the United Kingdom will not accept donations from men over 40 or 45 years of age.
Historical trends by country
The French pronatalist movement from 1919 to 1945 failed to convince French couples that they had a patriotic duty to help increase their country's birthrate. Even the government was reluctant in its support of the movement. It was only between 1938 and 1939 that the French government became directly and permanently involved in the pronatalist effort. Although the birthrate started to surge in late 1941, the trend was not sustained. The falling birthrate once again became a major concern among demographers and government officials beginning in the 1970s.
From 1800 to 1940, fertility fell in the US. There was a marked decline in fertility in the early 1900s, associated with improved contraceptives, greater access to contraceptives and sexuality information and the "first" sexual revolution.
After 1940 fertility suddenly started going up again, reaching a new peak in 1957. After 1960, fertility started declining rapidly. In the Baby Boom years (1946–1964), women married earlier and had their babies sooner; the number of children born to mothers after age 35 did not increase.
After 1960, new methods of contraception became available, and ideal family size fell from three to two children. Couples postponed marriage and first births, and they sharply reduced the number of third and fourth births.
Infertility primarily refers to the biological inability of a person to contribute to conception. Infertility may also refer to the state of a woman who is unable to carry a pregnancy to full term. There are many biological causes of infertility, including some that medical intervention can treat.
- Birth control
- Family economics
- Family planning
- Fertility clinic
- Fertility tourism
- Fertility deity
- Fertility preservation
- Human Fertilisation and Embryology Authority
- Natural fertility
- Reproductive health
- Sub-replacement fertility
- Total fertility rate
- Fertility-development controversy
- Fertility factor (demography)
- "The demography of fertility and infertility". www.gfmer.ch.
- For detailed discussions of each measure see Paul George Demeny and Geoffrey McNicoll, Encyclopedia of Population (2003)
- Another way of doing it is to add up the ASFR for age 10-14, 15-19, 20-24, etc., and multiply by 5 (to cover the 5 year interval).
- Murphy, Michael (2013). "Cross-National Patterns of Intergenerational Continuities in Childbearing in Developed Countries". Biodemography and Social Biology. 59 (2): 101–126. doi:10.1080/19485565.2013.833779. ISSN 1948-5565.
- Hayford, S. R.; Morgan, S. P. (2008). "Religiosity and Fertility in the United States: The Role of Fertility Intentions". Social Forces. 86 (3): 1163–1188. doi:10.1353/sof.0.0000. PMC 2723861.
- Lars Dommermuth; Jane Klobas; Trude Lappegård (2014). "Differences in childbearing by time frame of fertility intention. A study using survey and register data from Norway". Part of the research project Family Dynamics, Fertility Choices and Family Policy (FAMDYN)
- Schaffnit, S. B.; Sear, R. (2014). "Wealth modifies relationships between kin and women's fertility in high-income countries". Behavioral Ecology. 25 (4): 834–842. doi:10.1093/beheco/aru059. ISSN 1045-2249.
- Rai, Piyush Kant; Pareek, Sarla; Joshi, Hemlata (2013). "Regression Analysis of Collinear Data using r-k Class Estimator: Socio-Economic and Demographic Factors Affecting the Total Fertility Rate (TFR) in India" (PDF). Journal of Data Science. 11.
- Bloom, David; Canning, David; Fink, Günther; Finlay, Jocelyn (2009). "Fertility, female labor force participation, and the demographic dividend". Journal of Economic Growth. 14 (2): 79–101. doi:10.1007/s10887-009-9039-9.
- Sato, Yasuhiro (30 July 2006), "Economic geography, fertility and migration" (PDF), Journal of Urban Economics, retrieved 31 March 2008
- Bongaarts, John (1978). "A Framework for Analyzing the Proximate Determinants of Fertility". Population and Development Review. 4 (1): 105–132. doi:10.2307/1972149. JSTOR 1972149.
- Stover, John (1998). "Revising the Proximate Determinants of Fertility Framework: What Have We Learned in the past 20 Years?". Studies in Family Planning. 29 (3): 255–267. doi:10.2307/172272. JSTOR 172272.
- Becker, Gary S. 1960. "An Economic Analysis of Fertility." In National Bureau Committee for Economic Research, Demographic and Economic Change in Developed Countries, a Conference of the Universities. Princeton, N.J.: Princeton University Press
- Mincer, Jacob. 1963. "Market Prices, Opportunity Costs, and Income Effects," in C. Christ (ed.) Measurement in Economics. Stanford, CA: Stanford University Press
- Easterlin, Richard A. (1975). "An Economic Framework for Fertility Analysis". Studies in Family Planning. 6 (3): 54–63. doi:10.2307/1964934. JSTOR 1964934. PMID 1118873.
- "How to get pregnant". Mayo Clinic. 2016-11-02. Retrieved 2018-02-16.
- "Fertility problems: assessment and treatment, Clinical guideline [CG156]". National Institute for Health and Care Excellence. Retrieved 2018-02-16. Published date: February 2013. Last updated: September 2017
- Dr. Philip B. Imler & David Wilbanks. "The Essential Guide to Getting Pregnant" (PDF). American Pregnancy Association.
- Dunson, D.B.; Baird, D.D.; Wilcox, A.J.; Weinberg, C.R. (1999). "Day-specific probabilities of clinical pregnancy based on two studies with imperfect measures of ovulation". Human Reproduction. 14 (7): 1835–1839. doi:10.1093/humrep/14.7.1835. ISSN 1460-2350.
- "Archived copy". Archived from the original on 2008-12-21. Retrieved 2008-09-22.
- Creinin, Mitchell D.; Keverline, Sharon; Meyn, Leslie A. (2004). "How regular is regular? An analysis of menstrual cycle regularity". Contraception. 70 (4): 289–92. doi:10.1016/j.contraception.2004.04.012. PMID 15451332.
- Anderson, S. E.; Dallal, G. E.; Must, A. (2003). "Relative Weight and Race Influence Average Age at Menarche: Results From Two Nationally Representative Surveys of US Girls Studied 25 Years Apart". Pediatrics. 111 (4 Pt 1): 844–50. doi:10.1542/peds.111.4.844. PMID 12671122.
- Apter D (February 1980). "Serum steroids and pituitary hormones in female puberty: a partly longitudinal study". Clin. Endocrinol. 12 (2): 107–20. doi:10.1111/j.1365-2265.1980.tb02125.x. PMID 6249519.
- Apter, D (1980). "Serum steroids and pituitary hormones in female puberty: a partly longitudinal study". Clinical Endocrinology. 12 (2): 107–20. doi:10.1111/j.1365-2265.1980.tb02125.x. PMID 6249519.
- Takahashi, TA; Johnson, KM (May 2015). "Menopause". The Medical clinics of North America. 99 (3): 521–34. doi:10.1016/j.mcna.2015.01.006. PMID 25841598.
- Bourgeois, F. John; Gehrig, Paola A.; Veljovich, Daniel S. (1 January 2005). "Obstetrics and Gynecology Recall". Lippincott Williams & Wilkins – via Google Books.
- A computer simulation run by Henri Leridon, PhD, an epidemiologist with the French Institute of Health and Medical Research:
- Dunson, David B.; Baird, Donna D.; Colombo, Bernardo (2004). "Increased Infertility With Age in Men and Women". Obstetrics & Gynecology. 103 (1): 51–6. doi:10.1097/01.AOG.0000100153.24061.45. PMID 14704244.
- Rothman, Kenneth J.; Wise, Lauren A.; Sørensen, Henrik T.; Riis, Anders H.; Mikkelsen, Ellen M.; Hatch, Elizabeth E. (2013). "Volitional determinants and age-related decline in fecundability: a general population prospective cohort study in Denmark". Fertility and Sterility. 99 (7): 1958–64. doi:10.1016/j.fertnstert.2013.02.040. PMC 3672329. PMID 23517858.
- Fertility Nutraceuticals, LLC. "How to improve IVF success rates with smart fertility supplement strategy". May 6, 2014.[unreliable medical source?]
- "Spanish woman ' is oldest mother'". BBC News. 2006-12-30. Retrieved 2006-12-30.
- Couzin, Jennifer (2004). "Reproductive Biology: Textbook Rewrite? Adult Mammals May Produce Eggs After All". Science. 303 (5664): 1593. doi:10.1126/science.303.5664.1593a. PMID 15016968.
- Wallace, WH; Kelsey, TW (2010). "Human Ovarian Reserve from Conception to the Menopause". PLoS ONE. 5 (1): e8772. doi:10.1371/journal.pone.0008772. PMC 2811725. PMID 20111701.
- "Pregnancy After 35". March of Dimes. Retrieved October 30, 2014.
- "Down syndrome".
- Kidd, Sharon A; Eskenazi, Brenda; Wyrobek, Andrew J (2001). "Effects of male age on semen quality and fertility: a review of the literature". Fertility and Sterility. 75 (2): 237–48. doi:10.1016/S0015-0282(00)01679-4. PMID 11172821.
- Effect of Age on Male Fertility Seminars in Reproductive Endocrinology. Volume, Number 3, August 1991. Sherman J. Silber, M.D.
- Campagne, Daniel M. (2013). "Can Male Fertility Be Improved Prior to Assisted Reproduction through The Control of Uncommonly Considered Factors?". International Journal of Fertility & Sterility. 6 (4): 214–23. PMC 3850314. PMID 24520443.
- Wiener-Megnazi, Zofnat; Auslender, Ron; Dirnfeld, Martha (1 January 2012). "Advanced paternal age and reproductive outcome". Asian J Androl. 14 (1): 69–76. doi:10.1038/aja.2011.69. PMC 3735149. PMID 22157982.
- Reichenberg, Abraham; Gross, Raz; Weiser, Mark; Bresnahan, Michealine; Silverman, Jeremy; Harlap, Susan; Rabinowitz, Jonathan; Shulman, Cory; Malaspina, Dolores; Lubin, Gad; Knobler, Haim Y.; Davidson, Michael; Susser, Ezra (2006). "Advancing Paternal Age and Autism". Archives of General Psychiatry. 63 (9): 1026–32. doi:10.1001/archpsyc.63.9.1026. PMID 16953005.
- Jaffe, AE; Eaton, WW; Straub, RE; Marenco, S; Weinberger, DR (1 March 2014). "Paternal age, de novo mutations and schizophrenia". Mol Psychiatry. 19 (3): 274–275. doi:10.1038/mp.2013.76. PMC 3929531. PMID 23752248.
- Schulz, S. Charles; Green, Michael F.; Nelson, Katharine J. (1 April 2016). "Schizophrenia and Psychotic Spectrum Disorders". Oxford University Press – via Google Books.
- Malaspina, Dolores; Harlap, Susan; Fennig, Shmuel; Heiman, Dov; Nahon, Daniella; Feldman, Dina; Susser, Ezra S. (2001). "Advancing Paternal Age and the Risk of Schizophrenia". Archives of General Psychiatry. 58 (4): 361–7. doi:10.1001/archpsyc.58.4.361. PMID 11296097.
- Sipos, Attila; Rasmussen, Finn; Harrison, Glynn; Tynelius, Per; Lewis, Glyn; Leon, David A; Gunnell, David (2004). "Paternal age and schizophrenia: a population based cohort study". BMJ. 329 (7474): 1070. doi:10.1136/bmj.38243.672396.55. PMC 526116. PMID 15501901.
- Malaspina, Dolores; Corcoran, Cheryl; Fahim, Cherine; Berman, Ariela; Harkavy-Friedman, Jill; Yale, Scott; Goetz, Deborah; Goetz, Raymond; Harlap, Susan; Gorman, Jack (2002). "Paternal age and sporadic schizophrenia: Evidence for de novo mutations". American Journal of Medical Genetics. 114 (3): 299–303. doi:10.1002/ajmg.1701. PMC 2982144. PMID 11920852.
- "Obesity | Fat men linked to low fertility". Sydney Morning Herald. 18 October 2010. Retrieved 19 October 2010.
- Plas, E; Berger, P; Hermann, M; Pflüger, H (2000). "Effects of aging on male fertility?". Experimental Gerontology. 35 (5): 543–51. doi:10.1016/S0531-5565(00)00120-0. PMID 10978677.
- Age Limit of Sperm Donors in the United Kingdom Pdf file Archived October 3, 2008, at the Wayback Machine.
- Reggiani, Andrés Horacio (Spring 1996). "Procreating France: The Politics of Demography, 1919-1945". French Historical Studies. 19 (3): 725–54. doi:10.2307/286642. JSTOR 286642.
- CDC. "Vital Statistics of the United States, 2003, Volume I, Natality", Table 1-1, "Live births, birth rates, and fertility rates, by race: United States, 1909-2003." https://www.cdc.gov/nchs/products/vsus.htm
- Makar, Robert S.; Toth, Thomas L. (2002). "The Evaluation of Infertility". American Journal of Clinical Pathology. 117 Suppl: S95–103. doi:10.1309/w8lj-k377-dhra-cp0b. PMID 14569805. Archived from the original on 2017-02-13.
This article incorporates material from the Citizendium article "Fertility (demography)", which is licensed under the Creative Commons Attribution-ShareAlike 3.0 Unported License but not under the GFDL.
- Barrett, Richard E., Donald J. Bogue, and Douglas L. Anderton. The Population of the United States 3rd Edition (1997) compendium of data
- Campagne, Daniel M (2013). "Can Male Fertility Be Improved Prior to Assisted Reproduction through The Control of Uncommonly Considered Factors?". International Journal of Fertility & Sterility. 6 (4): 214–23. PMC 3850314. PMID 24520443.
- Coale, Ansley J. and Susan C. Watkins, eds. The Decline of Fertility in Europe, (1986)
- Eversley, D. E. C. Social Theories of Fertility and the Malthusian Debate (1959) online edition
- Garrett, Eilidh et al. Family Size in England and Wales: Place, Class, and Demography, 1891-1911 (2001) online edition
- Grabill, Wilson H., Clyde V. Kiser, Pascal K. Whelpton. The Fertility of American Women (1958), influential study at the peak of the Baby Boom online edition
- Guzmán, José Miguel et al. The Fertility Transition in Latin America (1996) online edition
- Haines, Michael R. and Richard H. Steckel (eds.), A Population History of North America. Cambridge University Press, 2000, 752 pp. advanced scholarship
- Hawes, Joseph M. and Elizabeth I. Nybakken, eds. American Families: a Research Guide and Historical Handbook. (Greenwood Press, 1991)
- Klein, Herbert S. A Population History of the United States. Cambridge University Press, 2004. 316 pp
- Knox, P. L. et al. The United States: A Contemporary Human Geography. Longman, 1988. 287 pp.
- Kohler, Hans-Peter Fertility and Social Interaction: An Economic Perspective (2001) online edition
- Leete, Richard. Dynamics of Values in Fertility Change (1999) online edition
- Lovett, Laura L. Conceiving the Future: Pronatalism, Reproduction, and the Family in the United States, 1890–1938, (2007) 236 pages;
- Mintz Steven and Susan Kellogg. Domestic Revolutions: a Social History of American Family Life. (1988)
- Pampel, Fred C. and H. Elizabeth Peters, "The Easterlin Effect," Annual Review of Sociology (1995) v21 pp 163–194]
- Population Reference Bureau, Population Handbook (5th ed. 2004) online (5th ed. 2004).
- Reed, James. From Private Vice to Public Virtue: The Birth Control Movement and American Society Since 1830. 1978.
- Tarver, James D. The Demography of Africa (1996) online edition
- Weeks, John R. Population: An Introduction to Concepts and Issues (10th ed. 2007), standard textbook
- Demography — Scope and links to issue contents & abstracts.
- Journal of Population Economics
- Population and Development Review — Aims and abstract & supplement links.
- Population Bulletin — Each issue on a current population topic.
- Population Studies —Aims and scope.
- Review of Economics of the Household
- Josef Ehmer, Jens Ehrhardt, Martin Kohli (Eds.): Fertility in the History of the 20th Century: Trends, Theories, Policies, Discourses. Historical Social Research 36 (2), 2011.
- Fertility treatment and clinics in the UK - HFEA
- Fertility information and advice in the UK - Fertility Road
- Jorge Chavarro (2009) The Fertility Diet: Groundbreaking Research Reveals Natural Ways to Boost Ovulation and Improve Your Chances of Getting Pregnant, McGraw-Hill Professional. ISBN 978-0-07-162710-8
- Bock J (2002). "Introduction: evolutionary theory and the search for a unified theory of fertility" (PDF). Am. J. Hum. Biol. 14 (2): 145–8. doi:10.1002/ajhb.10039. PMID 11891930.
- Jones C (March 2008). "Ethical and legal conundrums of postmodern procreation". Int J Gynaecol Obstet. 100 (3): 208–10. doi:10.1016/j.ijgo.2007.09.031. PMID 18062970.
- United Nations World Population Prospects, the 2008 Revision, Data on fertility trends worldwide |
Skip to 0 minutes and 9 seconds We've given ourselves a challenge: see how far a miniature wheeled vehicle can go on the energy from a single AAA battery. We'll build our vehicle from commonly available items. We have an electric motor that we salvaged from an old toy; it will need a transmission system that will produce the right rotational speed at the wheels. The motor came with a pulley, so a belt drive might be the way to go. Here's a diagram of the belt drive: it has a small pulley and a large pulley, and a tensioned belt connecting the two. Tension is needed to stop the belt slipping on the pulleys. We'll use a rubber band in our belt drive.
Skip to 0 minutes and 56 seconds We want to know how much tension the belt will need. The tension will load the bearings and increase friction, and, if we increase friction, we'll increase the losses so the vehicle won't go so far. So that's our task: find the tension in the belt that will enable us to transmit the required torque but not generate too much friction in the bearings. Here's the geometry. To get a good range from the vehicle, we need it to be light, so we kept it small. We started with a wheel diameter of 30 millimetres; we found the effective diameter of the pulley on the motor was four millimetres. For our prototype, we decided to try the biggest pulley we could fit on the axle.
Skip to 1 minute and 51 seconds This would give us the maximum reduction ratio. A diameter of 25 millimetres seemed a good start. Later, we'll need to find the reduction ratio, but for now we'll just find the tension we need in the belt. What torque do we need from the belt drive to make the car move? Pause the video and draw a free-body diagram that will show the following. 1, the traction force at the drive wheels, which pushes the car along. Assume it travels from left to right. 2, the gravity load from the car onto the axle.
Skip to 2 minutes and 37 seconds 3, the horizontal force on the axle. 4, the normal force at the road. 5, the torque that the belt drive will apply to the axle.
Skip to 3 minutes and 3 seconds Here is the answer. How did you go? Now we'll find the required torque. Pause the video and write an equilibrium equation that you can use to find the value of the torque. We know that the wheel diameter is 30 millimetres. By a separate calculation, we estimated the force we will need at the drive wheel to be 25 millinewtons. A millinewton is a thousandth of a newton. So with this information we can find the numerical value of the required torque.
Skip to 3 minutes and 53 seconds Here's the answer. We calculate the required torque, C, as 0.375 millinewton metres. A millinewton metre is a thousandth of a newton metre. That's the beginning. Now we'll pause and work out our strategy. We can use the rope around a bollard analysis to find the maximum ratio of the tensions around each pulley. We can use moment equilibrium to find the difference in tensions on either side of each pulley. Solving will give us values for the two tensions from which we can find the additional forces on the bearing. Here's our belt drive. From the equation for tension ratio of a rope around a bollard, can you tell which pulley will slip first? Pause the video and think about it.
Skip to 5 minutes and 5 seconds Here is the answer. The small pulley will slip first, assuming that the coefficient of friction is the same for each pulley. The reason is that the ratio of tensions is the same but the angle of wrap is smaller for the smaller pulley. We need the angle of wrap for the small pulley; it's just geometry. It is given by this equation. The distance C is the distance between the centres of the two pulleys. For our prototype, we'll start with C equals 20 millimetres, and this gives the angle of wrap for the small pulley as 117 degrees. Now we can find the maximum available tension ratio. It's given by T2 over T1 equals e to the mu theta.
Skip to 6 minutes and 1 second Our beautiful equation for friction around a bollard.
Skip to 6 minutes and 7 seconds We have just found theta, so we'll need to estimate mu. We'll use a value of 0.3. Putting this into the equation gives us the tension ratio T2 over T1 equals 1.8. Now we need to find the required tension difference. To get this, we'll take moments about the drive axle. Pause the video and draw an FBD of the large pulley that will show 1, the two tensions. 2, the reaction from the shaft on the pulley that balances the two tensions. 3, the torque from the shaft on the large pulley. We'll specify that T2 is greater than T1.
Skip to 7 minutes and 12 seconds Here is the FBD you need. Did you get it? Now we'll apply equilibrium and find the difference in the tensions. Pause and have a go at it yourself if you want to.
Skip to 7 minutes and 33 seconds To get the tension difference, we'll take moments about the drive axle. We've already calculated the required torque; that was 0.375 millinewton metres, but let's make an allowance, say double it. To find the tension difference, we will use a free-body diagram. It will help us avoid errors. If we set the sum of moments about the centre of the pulley equal to naught, we get these equations. And, by manipulating the equations, we will get the tension difference T2 minus T1 equals 0.06 newtons. We have the tension difference and we have the maximum tension ratio. We're almost there. We can eliminate T2 from these equations, and we'll find that T1 equals 0.071 newtons, and then we can find T2 equals 0.131 newtons.
Skip to 8 minutes and 49 seconds Now to find the extra load on the bearing. We'll find the rectangular components of each of the belt forces and then combine them into a resultant. First the x-direction. The sum of the belt force components in the x-direction is given by this equation. If we substitute the numbers that we have for our model car, we will find that FBx equals 0.172 newtons. We'll do the same in the y-direction.
Skip to 9 minutes and 27 seconds We'll get this equation, and if we substitute the numbers, we'll find FBy equals 0.0536 newtons. Combining these two quantities, we find the total force on the bearings is 0.180 newtons, or we could call that 180 millinewtons. Is the bearing friction going to have too much effect? We can get some idea by comparing the extra load from the belt drive to the weight of the vehicle.
Skip to 10 minutes and 8 seconds The estimate for the vehicle mass is 38 grams, so the bearing load from the vehicle mass is 9.8 metres per second squared times 38 grams, which gives 372 millinewtons. The extra load from the belt drive is 180 millinewtons. It's about 48% of the total load from the vehicle weight.
Skip to 10 minutes and 42 seconds It will be a noticeable extra load, and there will be load on the motor bearing too. Perhaps we should look at alternatives to our belt drive. What do you think? And this is just the start.
Designing a flat belt drive
We’ll give you all the guidance you’ll need for designing this engineering component.
The video leads you through the process with opportunities to do calculations yourself involving belt tensions, rope around a bollard, traction forces and efficiency. In the end you will find the required tension and determine the implications for efficiency of transmission.
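If you would like to check your working in code, the short Python sketch below reproduces the numbers quoted in the video (the wheel and pulley diameters, a 20 mm centre distance, a 25 millinewton traction force, and an assumed friction coefficient of 0.3). It is only a sketch of the method, not a substitute for the worked solution.

```python
import math

# Belt-tension calculation, following the steps in the video. All input values
# are the ones stated there; the structure of the calculation is generic.
wheel_d  = 0.030    # wheel diameter, m
small_d  = 0.004    # motor pulley effective diameter, m
large_d  = 0.025    # axle pulley diameter, m
centres  = 0.020    # distance between pulley centres, m
traction = 0.025    # required traction force at the wheel, N
mu       = 0.3      # assumed coefficient of friction, belt on pulley

# Required torque at the axle, doubled as a design allowance (as in the video).
torque = traction * wheel_d / 2
torque_design = 2 * torque

# Angle of wrap on the small pulley (the one that slips first).
wrap = math.pi - 2 * math.asin((large_d - small_d) / (2 * centres))

# Capstan ("rope around a bollard") equation gives the maximum tension ratio.
ratio = math.exp(mu * wrap)                 # T2 / T1

# Moment equilibrium on the large pulley gives the tension difference.
diff = torque_design / (large_d / 2)        # T2 - T1

T1 = diff / (ratio - 1)
T2 = ratio * T1
print(f"wrap = {math.degrees(wrap):.0f} deg, T1 = {T1*1000:.0f} mN, T2 = {T2*1000:.0f} mN")
```

Running it gives a wrap angle of about 117 degrees and tensions close to the 71 mN and 131 mN found in the video.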
It might help if you download the design specification in the Downloads section below in case you want to refer to it as you go.
If you just watch the video it will take about 11 minutes. If you take the opportunity to do calculations it will take longer; it’s hard to say how much longer because it depends on so many factors, but allow a total of 40 minutes.
If you are stuck (or even if you aren’t) you might like to look at the worked solution that is available from the Downloads section.
- How did this work for you?
- Did you like using the equation for ratio of tensions around a pulley? |
CCSS.Math.Content.1.MD.C.4 - Organize, represent, and interpret data with up to three categories; ask and answer questions about the total number of data points, how many in each category, and how many more or less are in one category than in another.
Authors: National Governors Association Center for Best Practices, Council of Chief State School Officers
Title: CCSS.Math.Content.1.MD.C.4 Organize, Represent, And Interpret Data With Up To Three... Measurement and Data - 1st Grade Mathematics Common Core State Standards
Publisher: National Governors Association Center for Best Practices, Council of Chief State School Officers, Washington D.C.
Copyright Date: 2010
- Counting Objects and Graphing - Count the animals and graph them by coloring the correct number of squares.
- Create your own bar graph - Create your own bar graph; good for whole class activity.
- Fruit Fall - Catch the individual fruits, then graph how many you caught: Move the farmer back and forth to catch the falling fruit. When all the fruit has fallen you will see the fruit you caught represented in a picture graph. Use the picture graph to answer the "How Many" question to move to the next level.
- Grapher - Create your own graph; interactive column graph maker. Students can change values and labels.
- Interpret Bar Graph - Online quiz; read the graph and answer the questions about it.
- Olivia Octagon - Explore graphing using these animals; compare the results.
- Play Ball - Interpret the data of these baseball teams.
- Use Graphs to Answer Questions - This Saxon math site uses pictographs or tally marks in the form of check marks. Graphs are sometimes repeated with new questions. Be careful! (whole class activity to be used with projection)
- Use Graphs to Answer Questions - Use this bar graph to answer the questions.
- Which Bar Graph is Correct? - Online quiz, read the problem, then examine the graphs to select the one that represents the problem.
- Worksheet to print for pictograph practice - Worksheet; create your graph, then answer questions about your data.
Direct Detection Of Gravitational Waves
Two black holes circle each other in a gravitational dance. Spiraling closer over thousands of years, they eventually get so close that they can no longer keep dancing. In a fraction of a second the two black holes merge into a single, larger black hole. It's an event that happens fairly regularly throughout the universe. But this time, a group of humans 1.3 billion light years away measured the ripples in space and time produced during the merger.
It’s hard to overstate the significance of our first direct detection of gravitational waves. On the one hand the discovery announced in Physical Review Letters confirmed what we’ve suspected for decades: gravitational waves exist. By itself that’s not a big deal, since they are a natural result of general relativity, and we’ve had indirect evidence of gravitational waves since the 1970s. The direct detection of gravitational waves is yet another confirmation of what we’ve already known. On the other hand, this opens up an entirely new window to the universe.
The paper released today has been peer reviewed, which is comforting given the BICEP2 incident. It's also a remarkably strong result given the extreme sensitivity necessary to detect gravitational waves. The Advanced LIGO experiment consists of two detectors located in Louisiana and Washington. To qualify as a real detection, there must be a nearly simultaneous event in both detectors with the same basic form. In the signal plots published with the paper, the event in question matches up quite well between the two detectors. It also matches the expected signal as calculated from numerical simulations of merging black holes. This is a strong, clear signal confirming gravitational waves.
The data is good enough that we actually know quite a bit about the merging black holes. The larger black hole had a mass of about 36 Suns, while the smaller one had a mass of about 29 Suns. When the two black holes merged they formed a single black hole of about 62 Suns. You might notice those numbers don’t add up. That’s because in the process of merging, about 3 solar masses worth of energy was radiated away as gravitational waves. That’s a huge amount of energy to release in a fraction of a second, which is why we can detect it so clearly from more than a billion light years away. We also know some broader characteristics, such as how fast the final black hole rotates, roughly where in the sky the merger occurred and the cosmological redshift of the event (which is how we know its distance).
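As a rough sanity check of that figure, the snippet below converts the roughly three solar masses lost in the merger into energy using E = mc². The constants are standard values and the masses are the ones quoted above; this is only a back-of-the-envelope sketch.

```python
# Rough estimate of the energy radiated as gravitational waves in the merger,
# using E = m c^2 with the mass deficit of the quoted black hole masses.
M_SUN = 1.989e30      # kg, solar mass
C     = 2.998e8       # m/s, speed of light

radiated_mass = (36 + 29 - 62) * M_SUN   # ~3 solar masses
energy = radiated_mass * C**2            # joules
print(f"Radiated energy ~ {energy:.2e} J")   # roughly 5e47 J
```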
While the detection of gravitational waves is the biggest news, this is also further confirmation that black holes are real. If they weren’t black holes their merger would create a burst of light or neutrinos like a stupendous supernova, which wasn’t seen. This is also the first clear observation of a black hole merger.
Throughout the history of human civilization, humans have looked up at the sky and seen light. When Galileo first raised his telescope to the sky, he saw light. Over the centuries we've widened the range of light we can observe. We've launched telescopes into space to see types of light not visible from the ground. With the exception of some neutrinos and cosmic particles, the field of astronomy is rooted in our ability to observe and analyze light.
But now we can listen to the very fabric of space and time.
Paper: B. P. Abbott et al. (LIGO Scientific Collaboration and Virgo Collaboration). Observation of Gravitational Waves from a Binary Black Hole Merger. Phys. Rev. Lett. 116, 061102 (2016)
Can you make matrices which will fix one lucky vector and crush another to zero?
Explore how matrices can fix vectors and vector directions.
Explore the properties of matrix transformations with these 10 stimulating questions.
Explore the shape of a square after it is transformed by the action of a matrix.
Explore the meaning behind the algebra and geometry of matrices with these 10 individual problems.
Explore the transformations and comment on what you find.
This problem in geometry has been solved in no less than EIGHT ways by a pair of students. How would you solve it? How many of their solutions can you follow? How are they the same or different?
Follow hints using a little coordinate geometry, plane geometry and trig to see how matrices are used to work on transformations of the plane.
Investigate matrix models for complex numbers and quaternions.
Given probabilities of taking paths in a graph from each node, use matrix multiplication to find the probability of going from one vertex to another in 2, 3, 4, or even 100 stages (a short computational sketch appears after this list of problems).
Play countdown with matrices
An iterative method for finding the value of the Golden Ratio with explanations of how this involves the ratios of Fibonacci numbers and continued fractions. |
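As a concrete illustration of the path-probability problem mentioned above, here is a small Python/NumPy sketch. The transition matrix is made up; the point is only that the n-stage probabilities are the entries of the n-th matrix power.

```python
import numpy as np

# Toy transition matrix for a 3-node graph: entry P[i, j] is the probability of
# stepping from node i to node j in one stage (each row sums to 1).
P = np.array([
    [0.0, 0.5, 0.5],
    [0.3, 0.0, 0.7],
    [0.6, 0.4, 0.0],
])

# The probability of going from i to j in exactly n stages is the (i, j) entry
# of the matrix power P^n.
for n in (2, 3, 100):
    Pn = np.linalg.matrix_power(P, n)
    print(f"P^{n}[0, 2] = {Pn[0, 2]:.4f}")
```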
by Sara Donaldson, Ed.D., Wheaton College, Norton, MA
In their series of articles about rules that expire, Karp, Bush, and Dougherty (2014, 2015, 2017) discuss the negative impact many provided hints and shortcuts have on students' future mathematics success and confidence. Many of these "rules" do not hold true as students move into more complex topics (e.g., just adding a zero to the end of a number when multiplying by ten no longer works when you are working with decimals). And even when the "rules" do hold true, students' reliance on them does not support their understanding of the patterns and structures that make math work and that help them develop the sense making and reasoning needed for long-term success.
Rule number two of their original 13 Rules That Expire (Karp, Bush, & Dougherty, 2014) article is "Use key words to solve word problems" (p. 21). The authors explain that although key words can be helpful, when students are encouraged to scan problems for key words and numbers instead of first making sense of the overall problem situation, the everyday and multiple meanings of key words can lead to wrong answers and an inability to determine whether an answer makes sense (e.g., left might indicate subtraction, but it could also just be identifying handedness). One strategy for helping students develop the ability to make sense of problems, instead of relying on shortcuts, is to have them practice sorting problems based on the type of operation they would use to solve the problem. This task shifts the focus from solving problems to making sense of problems (as finding the answer is not part of the work) and allows students to determine the characteristics of problems that require adding versus subtracting on their own, thus helping to promote understanding and confidence.
Here is how I approach this type of lesson:
Preparation: Gather 10-15 one-step word problems with approximately half of them for each operation (e.g., half addition and half subtraction). You can make up problems, adapt problems from your curricular materials, or simply use problems from your math text or workbook. I try to choose problems with familiar contexts and easy numbers so students can focus on the structure of the problem without being overwhelmed by other details. Depending upon your grade level, the problems could involve addition and subtraction of single-digit whole numbers or multiplication and division of fractions, making the task easily adaptable for different parts of the curriculum.
Once you’ve chosen your problems, print them out and cut them apart. Each group or pair of students will need a complete set of problems. I put them into envelopes for easy distribution.
Step 1: Display two problems (one for each operation). Using a think-pair-share structure, have students determine what operation would be used to solve each problem. Then lead a short discussion around how students made their decisions. If students bring up “key words” as their strategy, push them to talk about what the word means in the problem and what the word indicates is happening in the problem situation.
Step 2: Explain to students that today they will be working in pairs/small groups to sort problems into two sets based on the operation needed to solve the problem. Emphasize that they will not be solving the problems, just sorting them. I usually encourage students to see each problem as a mini-story and to use what they picture happening in the story to help them determine the operation, just like they work to create a movie in their head to help them understand texts when they are reading.
Step 3: Group students and distribute the problem sets. I like to have students work with at least one other student on this task so they need to talk through decisions, however you could also start by having students complete the sorting independently before moving to step 4 where they will compare their sort with another student.
Step 4: Once students have sorted their problems have them work with another pair/group to compare their sorts. For any problems where they disagree on the operation, ask students to discuss their thinking and work to reach consensus. If a group is unable to come to agreement on a problem, ask them to put it aside so we can discuss it as a whole class during the debrief (Step 5).
Step 5: After groups have had time to talk through their sorts, bring the class together to debrief their thinking, talk through any problems which caused disagreement, and come up with some guidelines for determining the underlying characteristics of problems requiring each operation. With the problems displayed for all to see, have students make “We noticed that…” statements that generalize the patterns found for problems using each operation. For example, “We noticed that for subtraction we were finding the difference between two groups but for addition we were putting groups together.” Recording these generalization statements on anchor charts, along with representative problems, will serve as a good future resource for students that will promote their sense making and reasoning ability, as well as their independent problem solving ability.
Some groups of students are very adept at generalizing patterns, while others are not. Being prepared with questions such as, “Is that always true?” and “How is that different than in the (opposite operation) problems?” can be helpful. Additionally, pulling out a few problems that have similar structures and asking students, “What do you notice is the same about these problems?” is also a helpful scaffold.
Extending the lesson. In addition to sorting problems based on inverse operations, students can also sort problems that use the same operation, but which have different underlying structures. Connecting this type of sort to solution strategies helps students develop fluency as they come to recognize entry points for different types of problems and thus are better able to pull forward prior experiences as they tackle unfamiliar problems. The “Mathematics Glossary” of the Common Core State Standards for Mathematics identifies common addition and subtraction problem situations (Table 1) and multiplication and division problem situations (Table 2). Although students are not expected to be able to name these different problem structures, becoming familiar with them and recognizing that each type requires a slightly different approach will empower students and allow them to carry these strategies forward to similarly structured problems using more complex numbers in future years, as this knowledge and understanding will never “expire”, unlike the rules upon which many students currently rely. |
Understanding multiplication soon after counting, addition, and subtraction is ideal. Children learn arithmetic through a natural progression: counting, addition, subtraction, multiplication, and finally division. This raises the question of why arithmetic is learned in this sequence. More importantly, why learn multiplication after counting, addition, and subtraction, but before division?
In nuclear science, the decay chain refers to a series of radioactive decays of different radioactive decay products as a sequential series of transformations. It is also known as a "radioactive cascade". Most radioisotopes do not decay directly to a stable state, but rather undergo a series of decays until eventually a stable isotope is reached.
Decay stages are referred to by their relationship to previous or subsequent stages. A parent isotope is one that undergoes decay to form a daughter isotope. One example of this is uranium (atomic number 92) decaying into thorium (atomic number 90). The daughter isotope may be stable or it may decay to form a daughter isotope of its own. The daughter of a daughter isotope is sometimes called a granddaughter isotope.
The time it takes for a single parent atom to decay to an atom of its daughter isotope can vary widely, not only between different parent-daughter pairs, but also randomly between identical pairings of parent and daughter isotopes. The decay of each single atom occurs spontaneously, and the decay of an initial population of identical atoms over time t follows a decaying exponential distribution, e^(−λt), where λ is called the decay constant. One of the properties of an isotope is its half-life, the time by which half of an initial number of identical parent radioisotopes have decayed to their daughters, which is inversely related to λ. Half-lives have been determined in laboratories for many radioisotopes (or radionuclides). These can range from nearly instantaneous (less than 10^−21 seconds) to more than 10^19 years.
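A minimal sketch of that relationship in code, assuming an illustrative half-life of 1600 years (roughly that of radium-226): the decay constant is λ = ln 2 / t½, and the surviving fraction after time t is e^(−λt).

```python
import math

# Exponential decay: fraction of an initial population of identical parent
# atoms remaining after time t, using N(t)/N0 = exp(-lambda * t).
half_life = 1600.0                      # years (illustrative value)
decay_const = math.log(2) / half_life   # the decay constant, lambda

def remaining_fraction(t_years: float) -> float:
    """Fraction of the original parent atoms still undecayed after t_years."""
    return math.exp(-decay_const * t_years)

for t in (100, 1600, 10000):
    print(f"after {t:>5} years: {remaining_fraction(t):.3f} of the parent remains")
```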
The intermediate stages each emit the same amount of radioactivity as the original radioisotope (i.e. there is a one-to-one relationship between the numbers of decays in successive stages) but each stage releases a different quantity of energy. If and when equilibrium is achieved, each successive daughter isotope is present in direct proportion to its half-life; but since its activity is inversely proportional to its half-life, each nuclide in the decay chain finally contributes as many individual transformations as the head of the chain, though not the same energy. For example, uranium-238 is weakly radioactive, but pitchblende, a uranium ore, is 13 times more radioactive than the pure uranium metal because of the radium and other daughter isotopes it contains. Not only are unstable radium isotopes significant radioactivity emitters, but as the next stage in the decay chain they also generate radon, a heavy, inert, naturally occurring radioactive gas. Rock containing thorium and/or uranium (such as some granites) emits radon gas that can accumulate in enclosed places such as basements or underground mines.
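The one-to-one relationship between decays in successive stages can be sketched numerically. In the example below, the activity of the chain head is fixed and each daughter's equilibrium population follows from N = A/λ; the half-lives are rough values for three members of the uranium-238 chain and the activity figure is arbitrary.

```python
import math

# Sketch of secular equilibrium: when a long-lived parent feeds a chain of
# short-lived daughters, every member settles to the same activity A = lambda*N,
# so the number of atoms of each daughter is proportional to its half-life.
parent_activity = 1.0e6          # decays per second, set by the chain head (arbitrary)
half_lives_s = {
    "Ra-226": 1600 * 3.156e7,    # about 1600 years, in seconds
    "Rn-222": 3.82 * 86400,      # about 3.8 days
    "Po-218": 3.1 * 60,          # about 3.1 minutes
}

for nuclide, t_half in half_lives_s.items():
    lam = math.log(2) / t_half
    atoms = parent_activity / lam          # N = A / lambda at equilibrium
    print(f"{nuclide}: about {atoms:.2e} atoms present at equilibrium")
```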
All the elements and isotopes found on Earth, with the exceptions of hydrogen, deuterium, helium, helium-3, and perhaps trace amounts of stable lithium and beryllium isotopes which were created in the Big Bang, were created by the s-process or the r-process in stars, and for those to be today a part of the Earth, must have been created not later than 4.5 billion years ago. All the elements created more than 4.5 billion years ago are termed primordial, meaning they were generated by the universe's stellar processes. At the time when they were created, those that were unstable began decaying immediately. All the isotopes which have half-lives less than 100 million years have been reduced to 2.8×10⁻¹²% or less of whatever original amounts were created and captured by Earth's accretion; they are of trace quantity today, or have decayed away altogether. There are only two other methods to create isotopes: artificially, inside a man-made (or perhaps a natural) reactor, or through decay of a parent isotopic species, the process known as the decay chain.
Unstable isotopes decay to their daughter products (which may sometimes be even more unstable) at a given rate; eventually, often after a series of decays, a stable isotope is reached: there are about 200 stable isotopes in the universe. In stable isotopes, light elements typically have a lower ratio of neutrons to protons in their nucleus than heavier elements. Light elements such as helium-4 have close to a 1:1 neutron:proton ratio. The heaviest elements such as lead have close to 1.5 neutrons per proton (e.g. 1.536 in lead-208). No nuclide heavier than lead-208 is stable; these heavier elements have to shed mass to achieve stability, most usually as alpha decay. The other common decay method for isotopes with a high neutron to proton ratio (n/p) is beta decay, in which the nuclide changes elemental identity while keeping the same mass and lowering its n/p ratio. For some isotopes with a relatively low n/p ratio, there is an inverse beta decay, by which a proton is transformed into a neutron, thus moving towards a stable isotope; however, since fission almost always produces products which are neutron heavy, positron emission is relatively rare compared to electron emission.

There are many relatively short beta decay chains, at least two (a heavy, beta decay and a light, positron decay) for every discrete weight up to around 207 and some beyond, but for the higher mass elements (isotopes heavier than lead) there are only four pathways which encompass all decay chains. This is because there are just two main decay methods: alpha radiation, which reduces the mass by 4 atomic mass units (amu), and beta, which does not change the atomic mass at all (just the atomic number and the p/n ratio). The four paths are termed 4n, 4n + 1, 4n + 2, and 4n + 3; the remainder from dividing the atomic mass by four gives the chain the isotope will use to decay. There are other decay modes, but they invariably occur at a lower probability than alpha or beta decay. (It should not be supposed that these chains have no branches: the diagram below shows a few branches of chains, and in reality there are many more, because there are many more isotopes possible than are shown in the diagram.)

For example, the third atom of nihonium-278 synthesised underwent six alpha decays down to mendelevium-254, followed by an electron capture (a form of beta decay) to fermium-254, and then a seventh alpha to californium-250, upon which it would have followed the 4n + 2 chain as given in this article. However, the heaviest superheavy nuclides synthesised do not reach the four decay chains, because they reach a spontaneously fissioning nuclide after a few alpha decays that terminates the chain: this is what happened to the first two atoms of nihonium-278 synthesised, as well as to all heavier nuclides produced.
Three of those chains have a long-lived isotope (or nuclide) near the top; this long-lived isotope is a bottleneck through which the chain flows very slowly, and it keeps the chain below it "alive" with flow. The three long-lived nuclides are uranium-238 (half-life 4.5 billion years), uranium-235 (half-life 700 million years) and thorium-232 (half-life 14 billion years). The fourth chain has no such long-lasting bottleneck isotope, so almost all of the isotopes in that chain have long since decayed down to very near the stability at the bottom. Near the end of that chain is bismuth-209, which was long thought to be stable. Recently, however, bismuth-209 was found to be unstable, with a half-life of 19 billion billion years; it is the last step before stable thallium-205. In the distant past, around the time that the solar system formed, there were more kinds of unstable high-weight isotopes available, and the four chains were longer, with isotopes that have since decayed away. Today we have manufactured extinct isotopes, which again take their former places: plutonium-239, the nuclear bomb fuel, is the major example, with a half-life of "only" 24,500 years; it decays by alpha emission into uranium-235. In particular, through the large-scale production of neptunium-237 we have successfully resurrected the hitherto extinct fourth chain. The tables below hence start the four decay chains at isotopes of californium with mass numbers from 249 to 252.
Types of decay
The four most common modes of radioactive decay are: alpha decay, beta decay, inverse beta decay (considered as both positron emission and electron capture), and isomeric transition. Of these decay processes, only alpha decay changes the atomic mass number (A) of the nucleus, and always decreases it by four. Because of this, almost any decay will result in a nucleus whose atomic mass number has the same residue mod 4, dividing all nuclides into four chains. The members of any possible decay chain must be drawn entirely from one of these classes. All four chains also produce helium-4 (alpha particles are helium-4 nuclei).
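That bookkeeping is simple enough to sketch in a few lines of code (purely illustrative; the series labels follow the naming introduced in the next paragraph, plus the neptunium series discussed later):

```python
def decay_series(mass_number):
    """Classify a nuclide into one of the four decay chains by its mass number mod 4."""
    names = {
        0: "thorium series (4n)",
        1: "neptunium series (4n + 1)",
        2: "uranium/radium series (4n + 2)",
        3: "actinium series (4n + 3)",
    }
    return names[mass_number % 4]

# Th-232, Np-237, U-238 and U-235 head the four chains
for a in (232, 237, 238, 235):
    print(a, "->", decay_series(a))
```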
Three main decay chains (or families) are observed in nature, commonly called the thorium series, the radium or uranium series, and the actinium series, representing three of these four classes, and ending in three different, stable isotopes of lead. The mass number of every isotope in these chains can be represented as A = 4n, A = 4n + 2, and A = 4n + 3, respectively. The long-lived starting isotopes of these three series, respectively thorium-232, uranium-238, and uranium-235, have existed since the formation of the Earth, ignoring the artificial isotopes and their decays produced since the 1940s.
Due to the relatively short half-life of its starting isotope neptunium-237 (2.14 million years), the fourth chain, the neptunium series with A = 4n + 1, is already extinct in nature, except for the final rate-limiting step, decay of bismuth-209. Traces of 237Np and its decay products still do occur in nature, however, as a result of neutron capture in uranium ore. The ending isotope of this chain is now known to be thallium-205. Some older sources give the final isotope as bismuth-209, but it was recently discovered that it is very slightly radioactive, with a half-life of 2.01×10¹⁹ years.
There are also non-transuranic decay chains of unstable isotopes of light elements, for example those of magnesium-28 and chlorine-39. On Earth, most of the starting isotopes of these chains before 1945 were generated by cosmic radiation. Since 1945, the testing and use of nuclear weapons has also released numerous radioactive fission products. Almost all such isotopes decay by either β− or β+ decay modes, changing from one element to another without changing atomic mass. These later daughter products, being closer to stability, generally have longer half-lives until they finally decay into stability.
Actinide alpha decay chains
Actinides and fission products by half-life

|Actinides (by decay chain)||Half-life range||Fission products of 235U (by yield)|
|226Ra, 247Bk||1.3 k – 1.6 k a||no fission products in this range|
|240Pu, 229Th, 246Cm, 243Am||4.7 k – 7.4 k a||no fission products in this range|
|245Cm, 250Cm||8.3 k – 8.5 k a||no fission products in this range|
|230Th, 231Pa||32 k – 76 k a||no fission products in this range|
|236Np, 233U, 234U||150 k – 250 k a||99Tc, 126Sn|
|248Cm, 242Pu||327 k – 375 k a||79Se|
|237Np||2.1 M – 6.5 M a||135Cs, 107Pd|
|236U, 247Cm||15 M – 24 M a||129I|
|232Th, 238U, 235U||0.7 G – 14.1 G a||no fission products with half-lives beyond 15.7 M a|
In the four tables below, the minor branches of decay (with the branching probability of less than 0.0001%) are omitted. The energy release includes the total kinetic energy of all the emitted particles (electrons, alpha particles, gamma quanta, neutrinos, Auger electrons and X-rays) and the recoil nucleus, assuming that the original nucleus was at rest. The letter 'a' represents a year (from the Latin annus).
In the tables below (except neptunium), the historic names of the naturally occurring nuclides are also given. These names were used at the time when the decay chains were first discovered and investigated. From these historical names one can locate the particular chain to which the nuclide belongs, and replace it with its modern name.
The three naturally-occurring actinide alpha decay chains given below—thorium, uranium/radium (from U-238), and actinium (from U-235)—each ends with its own specific lead isotope (Pb-208, Pb-206, and Pb-207 respectively). All these isotopes are stable and are also present in nature as primordial nuclides, but their excess amounts in comparison with lead-204 (which has only a primordial origin) can be used in the technique of uranium-lead dating to date rocks.
The 4n chain of Th-232 is commonly called the "thorium series" or "thorium cascade". Beginning with naturally occurring thorium-232, this series includes the following elements: actinium, bismuth, lead, polonium, radium, radon and thallium. All are present, at least transiently, in any natural thorium-containing sample, whether metal, compound, or mineral. The series terminates with lead-208.
The total energy released from thorium-232 to lead-208, including the energy lost to neutrinos, is 42.6 MeV.
|nuclide||historic name (short)||historic name (long)||decay mode||half-life||energy released, MeV||product of decay|
|228Ra||MsTh1||Mesothorium 1||β−||5.75 a||0.046||228Ac|
|228Ac||MsTh2||Mesothorium 2||β−||6.25 h||2.124||228Th|
|224Ra||ThX||Thorium X||α||3.6319 d||5.789||220Rn|
|216Po||ThA||Thorium A||α||0.145 s||6.906||212Pb|
|212Pb||ThB||Thorium B||β−||10.64 h||0.570||212Bi|
|212Bi||ThC||Thorium C||β− 64.06%
|212Po||ThC′||Thorium C′||α||299 ns||8.784 ||208Pb|
|208Tl||ThC″||Thorium C″||β−||3.053 min||1.803 ||208Pb|
The 4n + 1 chain of 237Np is commonly called the "neptunium series" or "neptunium cascade". In this series, only two of the isotopes involved are found naturally in significant quantities, namely the final two: bismuth-209 and thallium-205. Some of the other isotopes have been detected in nature, originating from trace quantities of 237Np produced by the (n,2n) knockout reaction in primordial 238U. A smoke detector containing an americium-241 ionization chamber accumulates a significant amount of neptunium-237 as its americium decays; the following elements are also present in it, at least transiently, as decay products of the neptunium: actinium, astatine, bismuth, francium, lead, polonium, protactinium, radium, thallium, thorium, and uranium. Since this series was only discovered and studied in 1947–1948, its nuclides do not have historic names. One unique trait of this decay chain is that the noble gas radon is only produced in a rare branch and not the main decay sequence; thus, it does not migrate through rock nearly as much as the other three decay chains.
The total energy released from californium-249 to thallium-205, including the energy lost to neutrinos, is 66.8 MeV.
The 4n+2 chain of uranium-238 is called the "uranium series" or "radium series". Beginning with naturally occurring uranium-238, this series includes the following elements: astatine, bismuth, lead, polonium, protactinium, radium, radon, thallium, and thorium. All are present, at least transiently, in any natural uranium-containing sample, whether metal, compound, or mineral. The series terminates with lead-206.
The total energy released from uranium-238 to lead-206, including the energy lost to neutrinos, is 51.7 MeV.
The 4n+3 chain of uranium-235 is commonly called the "actinium series" or "actinium cascade". Beginning with the naturally-occurring isotope U-235, this decay series includes the following elements: actinium, astatine, bismuth, francium, lead, polonium, protactinium, radium, radon, thallium, and thorium. All are present, at least transiently, in any sample containing uranium-235, whether metal, compound, ore, or mineral. This series terminates with the stable isotope lead-207.
The total energy released from uranium-235 to lead-207, including the energy lost to neutrinos, is 46.4 MeV.
|nuclide||historic name (short)||historic name (long)||decay mode||half-life||energy released, MeV||product of decay|
|235U||AcU||Actin Uranium||α||7.04·10⁸ a||4.678||231Th|
|231Th||UY||Uranium Y||β−||25.52 h||0.391||231Pa|
|223Fr||AcK||Actinium K||β− 99.994%
|223Ra||AcX||Actinium X||α||11.43 d||5.979||219Rn|
|215Po||AcA||Actinium A||α 99.99977%
|211Pb||AcB||Actinium B||β−||36.1 min||1.367||211Bi|
|211Bi||AcC||Actinium C||α 99.724%
|211Po||AcC'||Actinium C'||α||516 ms||7.595||207Pb|
|207Tl||AcC"||Actinium C"||β−||4.77 min||1.418||207Pb|
- Nuclear physics
- Radioactive decay
- Valley of stability
- Decay product
- Radioisotopes (radionuclide)
- Radiometric dating
- "Archived copy". Archived from the original on 2008-09-20. Retrieved 2008-06-26.CS1 maint: archived copy as title (link)
- Koch, Lothar (2000). Transuranium Elements, in Ullmann's Encyclopedia of Industrial Chemistry. Wiley. doi:10.1002/14356007.a27_167.
- Peppard, D. F.; Mason, G. W.; Gray, P. R.; Mech, J. F. (1952). "Occurrence of the (4n + 1) series in nature" (PDF). Journal of the American Chemical Society. 74 (23): 6081–6084. doi:10.1021/ja01143a074.
- Audi, G.; Kondev, F. G.; Wang, M.; Huang, W. J.; Naimi, S. (2017). "The NUBASE2016 evaluation of nuclear properties" (PDF). Chinese Physics C. 41 (3): 030001. Bibcode:2017ChPhC..41c0001A. doi:10.1088/1674-1137/41/3/030001.
- Plus radium (element 88). While actually a sub-actinide, it immediately precedes actinium (89) and follows a three-element gap of instability after polonium (84) where no nuclides have half-lives of at least four years (the longest-lived nuclide in the gap is radon-222 with a half life of less than four days). Radium's longest lived isotope, at 1,600 years, thus merits the element's inclusion here.
- Specifically from thermal neutron fission of U-235, e.g. in a typical nuclear reactor.
- Milsted, J.; Friedman, A. M.; Stevens, C. M. (1965). "The alpha half-life of berkelium-247; a new long-lived isomer of berkelium-248". Nuclear Physics. 71 (2): 299. Bibcode:1965NucPh..71..299M. doi:10.1016/0029-5582(65)90719-4.
"The isotopic analyses disclosed a species of mass 248 in constant abundance in three samples analysed over a period of about 10 months. This was ascribed to an isomer of Bk248 with a half-life greater than 9 y. No growth of Cf248 was detected, and a lower limit for the β− half-life can be set at about 104 y. No alpha activity attributable to the new isomer has been detected; the alpha half-life is probably greater than 300 y."
- This is the heaviest nuclide with a half-life of at least four years before the "Sea of Instability".
- Excluding those "classically stable" nuclides with half-lives significantly in excess of 232Th; e.g., while 113mCd has a half-life of only fourteen years, that of 113Cd is nearly eight quadrillion years.
- Trenn, Thaddeus J. (1978). "Thoruranium (U-236) as the extinct natural parent of thorium: The premature falsification of an essentially correct theory". Annals of Science. 35 (6): 581–97. doi:10.1080/00033797800200441.
- Thoennessen, M. (2016). The Discovery of Isotopes: A Complete Compilation. Springer. p. 20. doi:10.1007/978-3-319-31763-2. ISBN 978-3-319-31761-8. LCCN 2016935977.
- Thoennessen, M. (2016). The Discovery of Isotopes: A Complete Compilation. Springer. p. 19. doi:10.1007/978-3-319-31763-2. ISBN 978-3-319-31761-8. LCCN 2016935977.
- C.M. Lederer; J.M. Hollander; I. Perlman (1968). Table of Isotopes (6th ed.). New York: John Wiley & Sons.
- Nucleonica nuclear science portal
- Nucleonica's Decay Engine for professional online decay calculations
- EPA – Radioactive Decay
- Government website listing isotopes and decay energies
- National Nuclear Data Center – freely available databases that can be used to check or construct decay chains
- IAEA – Live Chart of Nuclides (with decay chains)
- Decay Chain Finder
Presentation on theme: "Tangential and Centripetal Acceleration"— Presentation transcript:
1 Tangential and Centripetal Acceleration Chapter 7 section 2
2 Linear and Angular Relationships It is easier to describe the motion of an object in a circular path through angular quantities, but sometimes it's useful to understand how the angular quantities affect the linear quantities of an object in a circular path. Example: the velocity of a bat as it hits a ball.
3 What is a tangent? Tangent – A line that touches a circular path at a single point and forms a 90° angle with the radius of the circle at that point.
4 Tangential Speed Tangential Speed – The instantaneous linear speed of an object directed along the tangent to the object’s circular path.
5 Tangential Speed vs. Angular Speed Imagine two points on a circle. One point is 1 meter away from the axis and another is 2 meters away. The points start to rotate. Both points have the same angular speed because the angle between the initial and final positions is exactly the same. Both points have different tangential speeds: the further away from the axis, the faster the point must travel.
6 Tangential Speed Explained In order for both points to maintain the same angular displacement, the point further away from the axis has a longer radius and must travel through a larger arc length in the same amount of time.The ratio between the arc length and radius must remain constant within a circle to keep the angle the same.
7 Tangential Speed Equation
vt = rω
vt = tangential speed; units: length per time (m/s)
r = radius
ω = angular speed; units for angular speed must be in rad/s
8 Example Problem A golfer has an angular speed of 6.3 rad/s for his swing. He can choose between two drivers, one placing the club head 1.9 m from his axis of rotation and the other placing it 1.7 m from the axis. Find the tangential speed of each driver. Which will hit the ball further?
9 Example Problem Answer
1.9 m driver: tangential speed = 12 m/s
1.7 m driver: tangential speed = 11 m/s
The longer driver will hit the ball further, given the knowledge learned from projectile motion.
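A quick numerical check of this answer, using vt = rω from the equation slide (a minimal sketch; the helper name is purely illustrative):

```python
def tangential_speed(radius_m, angular_speed_rad_s):
    """Tangential speed v_t = r * omega."""
    return radius_m * angular_speed_rad_s

omega = 6.3  # rad/s, the golfer's angular speed
for r in (1.9, 1.7):
    print(f"club head at {r} m: v_t = {tangential_speed(r, omega):.0f} m/s")
```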
10 Tangential Acceleration Tangential Acceleration – The instantaneous linear acceleration of an object directed along the tangent to the object’s circular path.
11 Tangential Acceleration Explained Going back to the golfer example problem: when he is getting ready to swing, the angular speed is zero, and as he swings the driver down towards the ball, the angular speed increases. Hence there is an angular acceleration. The same holds true for tangential acceleration; angular and tangential acceleration are related to one another.
12 Tangential Acceleration Equation
at = rα
at = tangential acceleration; units: length per second per second (m/s²)
r = radius
α = angular acceleration; units must be in rad/s²
13 Example Problem A centrifuge starts from rest and accelerates to 10.4 rad/s in 2.4 seconds. What is the tangential acceleration of a vial that is 4.7 cm from the center?
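A sketch of the calculation, assuming a constant (average) angular acceleration α = Δω/Δt and using at = rα from the previous slide (the numbers are taken from the problem statement):

```python
def tangential_acceleration(radius_m, angular_accel_rad_s2):
    """Tangential acceleration a_t = r * alpha."""
    return radius_m * angular_accel_rad_s2

delta_omega = 10.4  # rad/s, final angular speed (starts from rest)
delta_t = 2.4       # s
r = 0.047           # m (4.7 cm from the center)

alpha = delta_omega / delta_t
a_t = tangential_acceleration(r, alpha)
print(f"alpha = {alpha:.2f} rad/s^2, a_t = {a_t:.2f} m/s^2")
```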
15 Velocity Is a Vector Velocity is a vector quantity: it has magnitude and direction. Using a car as an example, if you travel at 30 m/hr in a circle, is your velocity changing? Of course! Changing direction is changing velocity. Changing velocity means there is acceleration.
16 Centripetal Acceleration Centripetal Acceleration – The acceleration of an object directed towards the center of its circular path.
17 Graphical Look at Changing Velocity See how Δv points towards the center of the circle. That means the acceleration points towards the center of the circle.
19 Centripetal Acceleration vs. Centrifugal Acceleration Centripetal means "center-seeking". Centrifugal means "center-fleeing". Centrifugal acceleration is an imaginary acceleration and force; it is actually inertia in action. Example: the coat hanger and quarter trick.
20 Example Problem A cylindrical space station with a radius of 115 rotates around its longitudinal axis at an angular speed of rad/s. Calculate the centripetal acceleration on a person at the following locations:
At the center of the space station
Halfway to the rim of the space station
At the rim of the space station
22 Tangential and Centripetal Acceleration Tangential and centripetal accelerations are always perpendicular. Both can happen at the same time. Example: increasing a car’s speed while making a turn into a corner of a racetrack. The tangential component is due to changing speed. The centripetal component is due to changing direction.
23 Total Acceleration If both accelerations are happening at the same time, then the Pythagorean theorem must be used to find the total acceleration. The direction of the total acceleration can be found using the tangent function. The centripetal component still points towards the center of the circle.
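A minimal sketch of that combination, assuming the standard relations at = rα and ac = rω² (the racetrack numbers below are invented purely for illustration):

```python
import math

def total_acceleration(a_tangential, a_centripetal):
    """Magnitude of the total acceleration and its angle from the centripetal direction."""
    magnitude = math.hypot(a_tangential, a_centripetal)                # Pythagorean theorem
    angle_deg = math.degrees(math.atan2(a_tangential, a_centripetal))  # tangent function
    return magnitude, angle_deg

r = 50.0     # m, radius of the turn
omega = 0.6  # rad/s, angular speed
alpha = 0.1  # rad/s^2, angular acceleration (the car is speeding up)

a_t = r * alpha      # tangential component (changing speed)
a_c = r * omega**2   # centripetal component (changing direction)
a, theta = total_acceleration(a_t, a_c)
print(f"a_t = {a_t:.1f} m/s^2, a_c = {a_c:.1f} m/s^2, total = {a:.1f} m/s^2 at {theta:.0f} deg")
```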
THE INVERSE of a function reverses the action of that function.
Say, for example, that a function f acts on 5, producing f(5). Then if g is the inverse of f, then g acting on f(5) will bring back 5.
g(f(5)) = 5.
Actually, g must do that for all values in the domain of f. And f must do that for all values in the domain of g. Here is the definition:
Functions f(x) and g(x) are inverses of one another if:
f(g(x)) = x and g(f(x)) = x,
for all values of x in their respective domains.
Problem 1. Let f(x) and g(x) be inverses. Then if
f(0) = 8,
what is the value of g(8)?
g(8) = 0.
For, f, acting on 0, produces 8. Therefore, since g is the inverse of f, when it acts on 8, it will bring back 0.
g(f(x)) = x.
Example 1. Addition and subtraction are inverses. Subtracting a specific number reverses, or undoes, the result of adding it.
In the language of functions, let
f(x) = x + 2, and g(x) = x − 2.
f(x) adds 2 to its argument. g(x) subtracts 2.
Upon applying the definition:
f(g(x)) = f(x − 2) = (x − 2) + 2 = x,
g(f(x)) = g(x + 2) = (x + 2) − 2 = x.
The definition is satisfied. The functions f and g are inverses.
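The definition can also be checked numerically. A small sketch (the sample points are arbitrary):

```python
def f(x):
    """Adds 2 to its argument."""
    return x + 2

def g(x):
    """Subtracts 2 from its argument."""
    return x - 2

# f(g(x)) = x and g(f(x)) = x should hold for every x we try
for x in (-3, 0, 2.5, 10):
    assert f(g(x)) == x and g(f(x)) == x
print("f and g behave as inverses on the sample points")
```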
Problem 2. Let f(x) = x² and g(x) = x½. Show that they are inverses of one another. (The domain of f must be restricted to x ≥ 0.)
f(g(x)) = f(x½) = (x½)² = x,
g(f(x)) = g(x²) = (x²)½ = x.
When we write
(x + 3)⁴,
then x + 3 is the argument of the function
f(x) = x⁴.
f is that function which takes the 4th power of its argument.
Its inverse, g(x), will take the 4th root.
g(x) = x¼.
Example 2. Solve for x:
(x + 3)⁴ = 16.
Solution. To do that, we must free, or extract, the argument x + 3. We must write
x + 3 = . . .
x + 3 = 16¼ = 2.
x = 2 − 3 = −1.
Problem 3. Solve for x: (x − 4)⅕ = 2.
The inverse of taking the 5th root is taking the 5th power. Therefore, on taking the 5th power of both sides -- and thus freeing the argument:
x − 4 = 2⁵ = 32.
x = 36.
But say that we want to find the inverse of this function:
y = 3x − 4.
Then we can "invert" it by solving for x.
Upon exchanging sides:
Exchange the variables:
That function is the inverse of y = 3x − 4.
In other words, the inverse of the function that first multiplies by 3 and then subtracts 4, is the function that first adds 4 --
y = x + 4
-- and then divides by 3:
y = (x + 4)/3.
In any case, to find the inverse of a function y = f(x):
Solve for x, then exchange the variables.
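That recipe can be carried out mechanically with a computer algebra system. The following sketch uses the sympy library (an assumption of this example, not something the text requires) on the function y = 3x − 4 from above:

```python
import sympy as sp

x, y = sp.symbols('x y')

# Start from y = 3x - 4 and solve for x
equation = sp.Eq(y, 3*x - 4)
x_in_terms_of_y = sp.solve(equation, x)[0]   # (y + 4)/3

# Exchange the variables: the inverse function is y = (x + 4)/3
inverse = x_in_terms_of_y.subs(y, x)
print(inverse)   # x/3 + 4/3
```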
Problem 4. What function is the inverse of y = 5x?
On solving for x:
x = y/5,
so that, on exchanging the variables, the inverse is y = x/5.
Clearly, dividing by 5 is the inverse of multiplying by 5.
Problem 5. a) Let f(x) = −½x + 1. Can you immediately write g(x), its inverse?
g(x) = −2x + 2.
For, f is the function that multiplies its argument by −½ -- equivalently, divides by −2 -- and then adds 1. Its inverse will therefore first subtract 1:
x − 1
and then multiply by −2:
−2(x − 1) = −2x + 2.
b) Prove that f(x) and g(x) are inverses.
f(g(x)) = −½(−2x + 2) + 1 = x − 1 + 1 = x.
g(f(x)) = −2(−½x + 1) + 2 = x − 2 + 2 = x.
The function I(x) = x is called the identity function. It always returns x.
As a notation for the inverse of a function f, we sometimes see f −1 ("f inverse"). "−1" is not an exponent. That notation is used because in the language of composition of functions, we can write:
f o f −1 = I
This is similar in form to the multiplication of numbers, a · a⁻¹ = 1.
For the inverse trigonometric functions, see Topic 19 of Trigonometry.
The graph of an inverse function
The graph of the inverse of a function f(x) can be found as follows:
Reflect the graph about the x-axis, then rotate it 90° counterclockwise
(If we take the graph on the left to be the right-hand branch of y = x², then the graph on the right is its inverse, y = x½.)
To see that that is the graph of the inverse, let A be any point on
the graph of f(x), let its coördinates be (a, b), let it be a distance d from the origin C, and let AC make an angle θ with the x-axis; triangle ABC is right angled.
The figure on the left shows the reflection of A about the x-axis to the point D. The figure on the right shows the rotation of D 90° counterclockwise to the point C'.
We will see that the coördinates of C' are (b, a) -- and those are coördinates on the graph of the inverse of f (x). For if we call that inverse g(x), then according to the figure on the left,
f (a) = b.
And g(b) -- the figure on the right -- returns us to a:
g(b) = a.
The definition of the inverse is satisfied.
To see that the coördinates of C' are (b, a), consider that since angle C'A'D is 90°, then C'A' makes an angle of 90° − θ with the x-axis. That is, angle C'A'B' is the complement of angle B'A'D, which is angle θ. Therefore in the right triangle A'B'C', the angle at C' is equal to θ.
But the angle at A is the complement of θ. Therefore the triangles ABC, A'B'C' are congruent (Angle-side-angle), and those sides are equal that are opposite the equal angles:
A'B' is equal to AB -- which is b, the y-coördinate of f (x).
B'C' is equal to BC -- which is a, the x-coördinate of f (x).
Therefore the coördinates of C' are (b, a).
So, when each point (a, b) on f(x) is transformed into (b, a), then the graph that results is its inverse.
Each point (a, b) will also be transformed into (b, a) when (a, b) is reflected about the line y = x.
Therefore we say that the graphs of a function and its inverse are symmetrical with respect to the straight line y = x.
Introduction to general relativity
General relativity is a theory of gravitation that was developed by Albert Einstein between 1907 and 1915. According to general relativity, the observed gravitational attraction between masses results from their warping of space and time.
By the beginning of the 20th century, Newton's law of universal gravitation had been accepted for more than two hundred years as a valid description of the gravitational force between masses. In Newton's model, gravity is the result of an attractive force between massive objects. Although even Newton was troubled by the unknown nature of that force, the basic framework was extremely successful at describing motion.
Experiments and observations show that Einstein's description of gravitation accounts for several effects that are unexplained by Newton's law, such as minute anomalies in the orbits of Mercury and other planets. General relativity also predicts novel effects of gravity, such as gravitational waves, gravitational lensing and an effect of gravity on time known as gravitational time dilation. Many of these predictions have been confirmed by experiment, while others are the subject of ongoing research. For example, although there is indirect evidence for gravitational waves, direct evidence of their existence is still being sought by several teams of scientists in experiments such as the LIGO and GEO 600 projects.
General relativity has developed into an essential tool in modern astrophysics. It provides the foundation for the current understanding of black holes, regions of space where gravitational attraction is so strong that not even light can escape. Their strong gravity is thought to be responsible for the intense radiation emitted by certain types of astronomical objects (such as active galactic nuclei or microquasars). General relativity is also part of the framework of the standard Big Bang model of cosmology.
Although general relativity is not the only relativistic theory of gravity, it is the simplest such theory that is consistent with the experimental data. Nevertheless, a number of open questions remain, the most fundamental of which is how general relativity can be reconciled with the laws of quantum physics to produce a complete and self-consistent theory of quantum gravity.
From special to general relativity
In September 1905, Albert Einstein published his theory of special relativity, which reconciles Newton's laws of motion with electrodynamics (the interaction between objects with electric charge). Special relativity introduced a new framework for all of physics by proposing new concepts of space and time. Some then-accepted physical theories were inconsistent with that framework; a key example was Newton's theory of gravity, which describes the mutual attraction experienced by bodies due to their mass.
Several physicists, including Einstein, searched for a theory that would reconcile Newton's law of gravity and special relativity. Only Einstein's theory proved to be consistent with experiments and observations. To understand the theory's basic ideas, it is instructive to follow Einstein's thinking between 1907 and 1915, from his simple thought experiment involving an observer in free fall to his fully geometric theory of gravity.
A person in a free-falling elevator experiences weightlessness, and objects either float motionless or drift at constant speed. Since everything in the elevator is falling together, no gravitational effect can be observed. In this way, the experiences of an observer in free fall are indistinguishable from those of an observer in deep space, far from any significant source of gravity. Such observers are the privileged ("inertial") observers Einstein described in his theory of special relativity: observers for whom light travels along straight lines at constant speed.
Einstein hypothesized that the similar experiences of weightless observers and inertial observers in special relativity represented a fundamental property of gravity, and he made this the cornerstone of his theory of general relativity, formalized in his equivalence principle. Roughly speaking, the principle states that a person in a free-falling elevator cannot tell that he is in free fall. Every experiment in such a free-falling environment has the same results as it would for an observer at rest or moving uniformly in deep space, far from all sources of gravity.
Gravity and acceleration
Most effects of gravity vanish in free fall, but effects that seem the same as those of gravity can be produced by an accelerated frame of reference. An observer in a closed room cannot tell which of the following is true:
- Objects are falling to the floor because the room is resting on the surface of the Earth and the objects are being pulled down by gravity.
- Objects are falling to the floor because the room is aboard a rocket in space, which is accelerating at 9.81 m/s2 and is far from any source of gravity. The objects are being pulled towards the floor by the same "inertial force" that presses the driver of an accelerating car into the back of his seat.
Conversely, any effect observed in an accelerated reference frame should also be observed in a gravitational field of corresponding strength. This principle allowed Einstein to predict several novel effects of gravity in 1907, as explained in the next section.
An observer in an accelerated reference frame must introduce what physicists call fictitious forces to account for the acceleration experienced by himself and objects around him. One example, the force pressing the driver of an accelerating car into his or her seat, has already been mentioned; another is the force you can feel pulling your arms up and out if you attempt to spin around like a top. Einstein's master insight was that the constant, familiar pull of the Earth's gravitational field is fundamentally the same as these fictitious forces. The apparent magnitude of the fictitious forces always appears to be proportional to the mass of any object on which they act - for instance, the driver's seat exerts just enough force to accelerate the driver at the same rate as the car. By analogy, Einstein proposed that an object in a gravitational field should feel a gravitational force proportional to its mass, as embodied in Newton's law of gravitation.
In 1907, Einstein was still eight years away from completing the general theory of relativity. Nonetheless, he was able to make a number of novel, testable predictions that were based on his starting point for developing his new theory: the equivalence principle.
The first new effect is the gravitational frequency shift of light. Consider two observers aboard an accelerating rocket-ship. Aboard such a ship, there is a natural concept of "up" and "down": the direction in which the ship accelerates is "up", and unattached objects accelerate in the opposite direction, falling "downward". Assume that one of the observers is "higher up" than the other. When the lower observer sends a light signal to the higher observer, the acceleration causes the light to be red-shifted, as may be calculated from special relativity; the second observer will measure a lower frequency for the light than the first. Conversely, light sent from the higher observer to the lower is blue-shifted, that is, shifted towards higher frequencies. Einstein argued that such frequency shifts must also be observed in a gravitational field. This is illustrated in the figure at left, which shows a light wave that is gradually red-shifted as it works its way upwards against the gravitational acceleration. This effect has been confirmed experimentally, as described below.
This gravitational frequency shift corresponds to a gravitational time dilation: Since the "higher" observer measures the same light wave to have a lower frequency than the "lower" observer, time must be passing faster for the higher observer. Thus, time runs more slowly for observers who are lower in a gravitational field.
It is important to stress that, for each observer, there are no observable changes of the flow of time for events or processes that are at rest in his or her reference frame. Five-minute-eggs as timed by each observer's clock have the same consistency; as one year passes on each clock, each observer ages by that amount; each clock, in short, is in perfect agreement with all processes happening in its immediate vicinity. It is only when the clocks are compared between separate observers that one can notice that time runs more slowly for the lower observer than for the higher. This effect is minute, but it too has been confirmed experimentally in multiple experiments, as described below.
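A rough sense of the size of this effect near the Earth's surface can be sketched with the weak-field estimate Δν/ν ≈ gh/c² that follows from the argument above (the 22.5-metre height is an arbitrary, tower-scale choice):

```python
g = 9.81     # m/s^2, gravitational acceleration at the Earth's surface
c = 2.998e8  # m/s, speed of light
h = 22.5     # m, an illustrative height difference between the two observers

fractional_shift = g * h / c**2   # weak-field approximation to the frequency shift
print(f"fractional frequency shift ~ {fractional_shift:.1e}")
# about 2.5e-15: light climbing 22.5 m is red-shifted by a few parts in 10^15
```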
In a similar way, Einstein predicted the gravitational deflection of light: in a gravitational field, light is deflected downward. Quantitatively, his results were off by a factor of two; the correct derivation requires a more complete formulation of the theory of general relativity, not just the equivalence principle.
The equivalence between gravitational and inertial effects does not constitute a complete theory of gravity. When it comes to explaining gravity near our own location on the Earth's surface, noting that our reference frame is not in free fall, so that fictitious forces are to be expected, provides a suitable explanation. But a freely falling reference frame on one side of the Earth cannot explain why the people on the opposite side of the Earth experience a gravitational pull in the opposite direction.
A more basic manifestation of the same effect involves two bodies that are falling side by side towards the Earth. In a reference frame that is in free fall alongside these bodies, they appear to hover weightlessly – but not exactly so. These bodies are not falling in precisely the same direction, but towards a single point in space: namely, the Earth's centre of gravity. Consequently, there is a component of each body's motion towards the other (see the figure). In a small environment such as a freely falling lift, this relative acceleration is minuscule, while for skydivers on opposite sides of the Earth, the effect is large. Such differences in force are also responsible for the tides in the Earth's oceans, so the term "tidal effect" is used for this phenomenon.
The equivalence between inertia and gravity cannot explain tidal effects – it cannot explain variations in the gravitational field. For that, a theory is needed which describes the way that matter (such as the large mass of the Earth) affects the inertial environment around it.
From acceleration to geometry
In exploring the equivalence of gravity and acceleration as well as the role of tidal forces, Einstein discovered several analogies with the geometry of surfaces. An example is the transition from an inertial reference frame (in which free particles coast along straight paths at constant speeds) to a rotating reference frame (in which extra terms corresponding to fictitious forces have to be introduced in order to explain particle motion): this is analogous to the transition from a Cartesian coordinate system (in which the coordinate lines are straight lines) to a curved coordinate system (where coordinate lines need not be straight).
A deeper analogy relates tidal forces with a property of surfaces called curvature. For gravitational fields, the absence or presence of tidal forces determines whether or not the influence of gravity can be eliminated by choosing a freely falling reference frame. Similarly, the absence or presence of curvature determines whether or not a surface is equivalent to a plane. In the summer of 1912, inspired by these analogies, Einstein searched for a geometric formulation of gravity.
The elementary objects of geometry – points, lines, triangles – are traditionally defined in three-dimensional space or on two-dimensional surfaces. In 1907, the mathematician Hermann Minkowski (who was Einstein's former mathematics professor at the Swiss Federal Polytechnic) introduced a geometric formulation of Einstein's special theory of relativity in which the geometry included not only space, but also time. The basic entity of this new geometry is four-dimensional spacetime. The orbits of moving bodies are curves in spacetime; the orbits of bodies moving at constant speed without changing direction correspond to straight lines.
For surfaces, the generalization from the geometry of a plane – a flat surface – to that of a general curved surface had been described in the early 19th century by Carl Friedrich Gauss. This description had in turn been generalized to higher-dimensional spaces in a mathematical formalism introduced by Bernhard Riemann in the 1850s. With the help of Riemannian geometry, Einstein formulated a geometric description of gravity in which Minkowski's spacetime is replaced by distorted, curved spacetime, just as curved surfaces are a generalization of ordinary plane surfaces.
After he had realized the validity of this geometric analogy, it took Einstein a further three years to find the missing cornerstone of his theory: the equations describing how matter influences spacetime's curvature. Having formulated what are now known as Einstein's equations (or, more precisely, his field equations of gravity), he presented his new theory of gravity at several sessions of the Prussian Academy of Sciences in late 1915.
Geometry and gravitation
Paraphrasing John Wheeler, Einstein's geometric theory of gravity can be summarized thus: spacetime tells matter how to move; matter tells spacetime how to curve. What this means is addressed in the following three sections, which explore the motion of so-called test particles, examine which properties of matter serve as a source for gravity, and, finally, introduce Einstein's equations, which relate these matter properties to the curvature of spacetime.
Probing the gravitational field
In order to map a body's gravitational influence, it is useful to think about what physicists call probe or test particles: particles that are influenced by gravity, but are so small and light that we can neglect their own gravitational effect. In the absence of gravity and other external forces, a test particle moves along a straight line at a constant speed. In the language of spacetime, this is equivalent to saying that such test particles move along straight world lines in spacetime. In the presence of gravity, spacetime is non-Euclidean, or curved, and in curved spacetime straight world lines may not exist. Instead, test particles move along lines called geodesics, which are "as straight as possible".
A simple analogy is the following: In geodesy, the science of measuring Earth's size and shape, a geodesic (from Greek "geo", Earth, and "daiein", to divide) is the shortest route between two points on the Earth's surface. Approximately, such a route is a segment of a great circle, such as a line of longitude or the equator. These paths are certainly not straight, simply because they must follow the curvature of the Earth's surface. But they are as straight as is possible subject to this constraint.
The properties of geodesics differ from those of straight lines. For example, on a plane, parallel lines never meet, but this is not so for geodesics on the surface of the Earth: for example, lines of longitude are parallel at the equator, but intersect at the poles. Analogously, the world lines of test particles in free fall are spacetime geodesics, the straightest possible lines in spacetime. But still there are crucial differences between them and the truly straight lines that can be traced out in the gravity-free spacetime of special relativity. In special relativity, parallel geodesics remain parallel. In a gravitational field with tidal effects, this will not, in general, be the case. If, for example, two bodies are initially at rest relative to each other, but are then dropped in the Earth's gravitational field, they will move towards each other as they fall towards the Earth's centre.
Compared with planets and other astronomical bodies, the objects of everyday life (people, cars, houses, even mountains) have little mass. Where such objects are concerned, the laws governing the behaviour of test particles are sufficient to describe what happens. Notably, in order to deflect a test particle from its geodesic path, an external force must be applied. A person sitting on a chair is trying to follow a geodesic, that is, to fall freely towards the centre of the Earth. But the chair applies an external upwards force preventing the person from falling. In this way, general relativity explains the daily experience of gravity on the surface of the Earth not as the downwards pull of a gravitational force, but as the upwards push of external forces. These forces deflect all bodies resting on the Earth's surface from the geodesics they would otherwise follow. For matter objects whose own gravitational influence cannot be neglected, the laws of motion are somewhat more complicated than for test particles, although it remains true that spacetime tells matter how to move.
Einstein's equations are the centerpiece of general relativity. They provide a precise formulation of the relationship between spacetime geometry and the properties of matter, using the language of mathematics. More concretely, they are formulated using the concepts of Riemannian geometry, in which the geometric properties of a space (or a spacetime) are described by a quantity called a metric. The metric encodes the information needed to compute the fundamental geometric notions of distance and angle in a curved space (or spacetime).
A spherical surface like that of the Earth provides a simple example. The location of any point on the surface can be described by two coordinates: the geographic latitude and longitude. Unlike the Cartesian coordinates of the plane, coordinate differences are not the same as distances on the surface, as shown in the diagram on the right: for someone at the equator, moving 30 degrees of longitude westward (magenta line) corresponds to a distance of roughly 3,300 kilometers (2,100 mi). On the other hand, someone at a latitude of 55 degrees, moving 30 degrees of longitude westward (blue line) covers a distance of merely 1,900 kilometers (1,200 mi). Coordinates therefore do not provide enough information to describe the geometry of a spherical surface, or indeed the geometry of any more complicated space or spacetime. That information is precisely what is encoded in the metric, which is a function defined at each point of the surface (or space, or spacetime) and relates coordinate differences to differences in distance. All other quantities that are of interest in geometry, such as the length of any given curve, or the angle at which two curves meet, can be computed from this metric function.
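Those two distances can be checked with a few lines of arithmetic. The sketch below treats the Earth as a sphere of radius 6,371 km, which is the approximation behind the rounded figures above:

```python
import math

R_EARTH_KM = 6371.0  # mean Earth radius, spherical approximation

def east_west_distance(delta_longitude_deg, latitude_deg):
    """Distance along a circle of latitude covered by a given longitude difference."""
    return R_EARTH_KM * math.radians(delta_longitude_deg) * math.cos(math.radians(latitude_deg))

print(f"30 degrees of longitude at the equator: {east_west_distance(30, 0):,.0f} km")
print(f"30 degrees of longitude at 55 degrees latitude: {east_west_distance(30, 55):,.0f} km")
# roughly 3,300 km and 1,900 km, as quoted above
```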
The metric function and its rate of change from point to point can be used to define a geometrical quantity called the Riemann curvature tensor, which describes exactly how the space (or spacetime) is curved at each point. In general relativity, the metric and the Riemann curvature tensor are quantities defined at each point in spacetime. As has already been mentioned, the matter content of the spacetime defines another quantity, the Energy-momentum tensor T, and the principle that "spacetime tells matter how to move, and matter tells spacetime how to curve" means that these quantities must be related to each other. Einstein formulated this relation by using the Riemann curvature tensor and the metric to define another geometrical quantity G, now called the Einstein tensor, which describes some aspects of the way spacetime is curved. Einstein's equation then states that
G = (8πG/c⁴) T,
i.e., up to a constant multiple, the quantity G (which measures curvature) is equated with the quantity T (which measures matter content). The constants involved in this equation reflect the different theories that went into its making: π is one of the basic constants of geometry, G is the gravitational constant that is already present in Newtonian gravity, and c is the speed of light, the key constant in special relativity.
This equation is often referred to in the plural as Einstein's equations, since the quantities G and T are each determined by several functions of the coordinates of spacetime, and the equations equate each of these component functions. A solution of these equations describes a particular geometry of space and time; for example, the Schwarzschild solution describes the geometry around a spherical, non-rotating mass such as a star or a black hole, whereas the Kerr solution describes a rotating black hole. Still other solutions can describe a gravitational wave or, in the case of the Friedmann–Lemaître–Robertson–Walker solution, an expanding universe. The simplest solution is the uncurved Minkowski spacetime, the spacetime described by special relativity.
No scientific theory is apodictically true; each is a model that must be checked by experiment. Newton's law of gravity was accepted because it accounted for the motion of planets and moons in the solar system with considerable accuracy. As the precision of experimental measurements gradually improved, some discrepancies with Newton's predictions were observed, and these were accounted for in the general theory of relativity. Similarly, the predictions of general relativity must also be checked with experiment, and Einstein himself devised three tests now known as the classical tests of the theory:
- Newtonian gravity predicts that the orbit which a single planet traces around a perfectly spherical star should be an ellipse. Einstein's theory predicts a more complicated curve: the planet behaves as if it were travelling around an ellipse, but at the same time, the ellipse as a whole is rotating slowly around the star. In the diagram on the right, the ellipse predicted by Newtonian gravity is shown in red, and part of the orbit predicted by Einstein in blue. For a planet orbiting the Sun, this deviation from Newton's orbits is known as the anomalous perihelion shift. The first measurement of this effect, for the planet Mercury, dates back to 1859. The most accurate results for Mercury and for other planets to date are based on measurements which were undertaken between 1966 and 1990, using radio telescopes. General relativity predicts the correct anomalous perihelion shift for all planets where this can be measured accurately (Mercury, Venus and the Earth).
- According to general relativity, light does not travel along straight lines when it propagates in a gravitational field. Instead, it is deflected in the presence of massive bodies. In particular, starlight is deflected as it passes near the Sun, leading to apparent shifts of up to 1.75 arc seconds in the stars' positions in the night sky (an arc second is equal to 1/3600 of a degree); a rough calculation of this figure is sketched after this list. In the framework of Newtonian gravity, a heuristic argument can be made that leads to light deflection by half that amount. The different predictions can be tested by observing stars that are close to the Sun during a solar eclipse. In this way, a British expedition to West Africa in 1919, directed by Arthur Eddington, confirmed that Einstein's prediction was correct, and the Newtonian predictions wrong, via observation of the May 1919 eclipse. Eddington's results were not very accurate; subsequent observations of the deflection of the light of distant quasars by the Sun, which utilize highly accurate techniques of radio astronomy, have confirmed Eddington's results with significantly better precision (the first such measurements date from 1967, the most recent comprehensive analysis from 2004).
- Gravitational redshift was first measured in a laboratory setting in 1959 by Pound and Rebka. It is also seen in astrophysical measurements, notably for light escaping the White Dwarf Sirius B. The related gravitational time dilation effect has been measured by transporting atomic clocks to altitudes of between tens and tens of thousands of kilometers (first by Hafele and Keating in 1971; most accurately to date by Gravity Probe A launched in 1976).
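A back-of-the-envelope check of the 1.75 arc second figure mentioned above (a sketch using the general-relativistic deflection 4GM/(c²R) for a light ray grazing the Sun, with standard values for the Sun's mass and radius):

```python
import math

G = 6.674e-11     # m^3 kg^-1 s^-2, gravitational constant
c = 2.998e8       # m/s, speed of light
M_SUN = 1.989e30  # kg, solar mass
R_SUN = 6.957e8   # m, solar radius

deflection_rad = 4 * G * M_SUN / (c**2 * R_SUN)  # GR prediction, twice the Newtonian value
deflection_arcsec = math.degrees(deflection_rad) * 3600
print(f"deflection at the solar limb: {deflection_arcsec:.2f} arc seconds")  # ~1.75
```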
Of these tests, only the perihelion advance of Mercury was known prior to Einstein's final publication of general relativity in 1916. The subsequent experimental confirmation of his other predictions, especially the first measurements of the deflection of light by the sun in 1919, catapulted Einstein to international stardom. These three experimental tests justified adopting general relativity over Newton's theory and, incidentally, over a number of alternatives to general relativity that had been proposed.
Further tests of general relativity include precision measurements of the Shapiro effect or gravitational time delay for light, most recently in 2002 by the Cassini space probe. One set of tests focuses on effects predicted by general relativity for the behaviour of gyroscopes travelling through space. One of these effects, geodetic precession, has been tested with the Lunar Laser Ranging Experiment (high precision measurements of the orbit of the Moon). Another, which is related to rotating masses, is called frame-dragging. The geodetic and frame-dragging effects were both tested by the Gravity Probe B satellite experiment launched in 2004, with results confirming relativity to within 0.5% and 15%, respectively, as of December 2008.
By cosmic standards, gravity throughout the solar system is weak. Since the differences between the predictions of Einstein's and Newton's theories are most pronounced when gravity is strong, physicists have long been interested in testing various relativistic effects in a setting with comparatively strong gravitational fields. This has become possible thanks to precision observations of binary pulsars. In such a star system, two highly compact neutron stars orbit each other. At least one of them is a pulsar – an astronomical object that emits a tight beam of radiowaves. These beams strike the Earth at very regular intervals, similarly to the way that the rotating beam of a lighthouse means that an observer sees the lighthouse blink, and can be observed as a highly regular series of pulses. General relativity predicts specific deviations from the regularity of these radio pulses. For instance, at times when the radio waves pass close to the other neutron star, they should be deflected by the star's gravitational field. The observed pulse patterns are impressively close to those predicted by general relativity.
One particular set of observations is related to eminently useful practical applications, namely to satellite navigation systems such as the Global Positioning System that are used both for precise positioning and timekeeping. Such systems rely on two sets of atomic clocks: clocks aboard satellites orbiting the Earth, and reference clocks stationed on the Earth's surface. General relativity predicts that these two sets of clocks should tick at slightly different rates, due to their different motions (an effect already predicted by special relativity) and their different positions within the Earth's gravitational field. In order to ensure the system's accuracy, the satellite clocks are either slowed down by a relativistic factor, or that same factor is made part of the evaluation algorithm. In turn, tests of the system's accuracy (especially the very thorough measurements that are part of the definition of universal coordinated time) are testament to the validity of the relativistic predictions.
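The size of the combined effect can be sketched with the usual weak-field estimates: a gravitational term GM(1/R − 1/r)/c² and a velocity term v²/(2c²). The orbital radius below is an approximate GPS value, and the ground clock's own motion is neglected:

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2, gravitational constant
c = 2.998e8         # m/s, speed of light
M_EARTH = 5.972e24  # kg
R_GROUND = 6.371e6  # m, radius of the ground clock (Earth's surface)
R_ORBIT = 2.657e7   # m, approximate GPS orbital radius (~20,200 km altitude)

v_orbit = math.sqrt(G * M_EARTH / R_ORBIT)  # circular orbital speed

gravity_term = G * M_EARTH / c**2 * (1 / R_GROUND - 1 / R_ORBIT)  # satellite clock ticks faster
velocity_term = v_orbit**2 / (2 * c**2)                           # satellite clock ticks slower
net = gravity_term - velocity_term

print(f"net fractional rate difference: {net:.2e}")
print(f"accumulated offset: {net * 86400 * 1e6:.0f} microseconds per day")  # roughly +38 us/day
```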
A number of other tests have probed the validity of various versions of the equivalence principle; strictly speaking, all measurements of gravitational time dilation are tests of the weak version of that principle, not of general relativity itself. So far, general relativity has passed all observational tests.
Models based on general relativity play an important role in astrophysics, and the success of these models is further testament to the theory's validity.
Since light is deflected in a gravitational field, it is possible for the light of a distant object to reach an observer along two or more paths. For instance, light of a very distant object such as a quasar can pass along one side of a massive galaxy and be deflected slightly so as to reach an observer on Earth, while light passing along the opposite side of that same galaxy is deflected as well, reaching the same observer from a slightly different direction. As a result, that particular observer will see one astronomical object in two different places in the night sky. This kind of focussing is well-known when it comes to optical lenses, and hence the corresponding gravitational effect is called gravitational lensing.
Observational astronomy uses lensing effects as an important tool to infer properties of the lensing object. Even in cases where that object is not directly visible, the shape of a lensed image provides information about the mass distribution responsible for the light deflection. In particular, gravitational lensing provides one way to measure the distribution of dark matter, which does not give off light and can be observed only by its gravitational effects. One particularly interesting application is large-scale observations, where the lensing masses are spread out over a significant fraction of the observable universe, and can be used to obtain information about the large-scale properties and evolution of our cosmos.
Gravitational waves, a direct consequence of Einstein's theory, are distortions of geometry that propagate at the speed of light, and can be thought of as ripples in spacetime. They should not be confused with the gravity waves of fluid dynamics, which are a different concept.
Indirectly, the effect of gravitational waves has been detected in observations of specific binary stars. Such pairs of stars orbit each other and, as they do so, gradually lose energy by emitting gravitational waves. For ordinary stars like our sun, this energy loss would be too small to be detectable, but this energy loss was observed in 1974 in a binary pulsar called PSR1913+16. In such a system, one of the orbiting stars is a pulsar. This has two consequences: a pulsar is an extremely dense object known as a neutron star, for which gravitational wave emission is much stronger than for ordinary stars. Also, a pulsar emits a narrow beam of electromagnetic radiation from its magnetic poles. As the pulsar rotates, its beam sweeps over the Earth, where it is seen as a regular series of radio pulses, just as a ship at sea observes regular flashes of light from the rotating light in a lighthouse. This regular pattern of radio pulses functions as a highly accurate "clock". It can be used to time the double star's orbital period, and it reacts sensitively to distortions of space-time in its immediate neighbourhood.
The discoverers of PSR1913+16, Russell Hulse and Joseph Taylor, were awarded the Nobel Prize in Physics in 1993. Since then, several other binary pulsars have been found. The most useful are those in which both stars are pulsars, since they provide the most accurate tests of general relativity.
Currently, one major goal of research in relativity is the direct detection of gravitational waves. To this end, a number of land-based gravitational wave detectors are in operation, and a mission to launch a space-based detector, LISA, is currently under development, with a precursor mission ( LISA Pathfinder) due for launch in June 2013. If gravitational waves are detected, they could be used to obtain information about compact objects such as neutron stars and black holes, and also to probe the state of the early universe fractions of a second after the Big Bang.
When mass is concentrated into a sufficiently compact region of space, general relativity predicts the formation of a black hole – a region of space with a gravitational attraction so strong that not even light can escape. Certain types of black holes are thought to be the final state in the evolution of massive stars. On the other hand, supermassive black holes with the mass of millions or billions of Suns are assumed to reside in the cores of most galaxies, and they play a key role in current models of how galaxies have formed over the past billions of years.
Matter falling onto a compact object is one of the most efficient mechanisms for releasing energy in the form of radiation, and matter falling onto black holes is thought to be responsible for some of the brightest astronomical phenomena imaginable. Notable examples of great interest to astronomers are quasars and other types of active galactic nuclei. Under the right conditions, falling matter accumulating around a black hole can lead to the formation of jets, in which focused beams of matter are flung away into space at speeds near that of light.
There are several properties that make black holes the most promising sources of gravitational waves. One reason is that black holes are the most compact objects that can orbit each other as part of a binary system; as a result, the gravitational waves emitted by such a system are especially strong. Another reason follows from what are called black hole uniqueness theorems: over time, black holes retain only a minimal set of distinguishing features (since different hair styles are a crucial part of what gives different people their different appearances, these theorems have become known as "no hair" theorems). For instance, in the long term, the collapse of a hypothetical matter cube will not result in a cube-shaped black hole. Instead, the resulting black hole will be indistinguishable from a black hole formed by the collapse of a spherical mass, but with one important difference: in its transition to a spherical shape, the black hole formed by the collapse of a cube will emit gravitational waves.
One of the most important aspects of general relativity is that it can be applied to the universe as a whole. A key point is that, on large scales, our universe appears to be constructed along very simple lines: All current observations suggest that, on average, the structure of the cosmos should be approximately the same, regardless of an observer's location or direction of observation: the universe is approximately homogeneous and isotropic. Such comparatively simple universes can be described by simple solutions of Einstein's equations. The current cosmological models of the universe are obtained by combining these simple solutions to general relativity with theories describing the properties of the universe's matter content, namely thermodynamics, nuclear- and particle physics. According to these models, our present universe emerged from an extremely dense high-temperature state (the Big Bang) roughly 14 billion years ago, and has been expanding ever since.
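The quoted age of roughly 14 billion years is consistent with the simplest order-of-magnitude estimate, the Hubble time 1/H0; the sketch below assumes a Hubble constant of about 70 km/s per megaparsec (the precise age in a full cosmological model also depends on the universe's matter and energy content).

    # Order-of-magnitude age of the universe: the Hubble time 1/H0.
    H0_km_s_per_Mpc = 70.0       # assumed value of the Hubble constant
    m_per_Mpc = 3.086e22         # metres in one megaparsec
    s_per_year = 3.156e7         # seconds in one year

    H0 = H0_km_s_per_Mpc * 1000.0 / m_per_Mpc   # in 1/s
    hubble_time_years = 1.0 / H0 / s_per_year
    print(hubble_time_years / 1e9)              # roughly 14 billion years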
Einstein's equations can be generalized by adding a term called the cosmological constant. When this term is present, empty space itself acts as a source of attractive or, unusually, repulsive gravity. Einstein originally introduced this term in his pioneering 1917 paper on cosmology, with a very specific motivation: contemporary cosmological thought held the universe to be static, and the additional term was required for constructing static model universes within the framework of general relativity. When it became apparent that the universe is not static, but expanding, Einstein was quick to discard this additional term; prematurely, as we know today: From about 1998 on, a steadily accumulating body of astronomical evidence has shown that the expansion of the universe is accelerating in a way that suggests the presence of a cosmological constant or, equivalently, of a dark energy with specific properties that pervades all of space.
Modern research: general relativity and beyond
General relativity is very successful in providing a framework for accurate models which describe an impressive array of physical phenomena. On the other hand, there are many interesting open questions, and in particular, the theory as a whole is almost certainly incomplete.
In contrast to all other modern theories of fundamental interactions, general relativity is a classical theory: it does not include the effects of quantum physics. The quest for a quantum version of general relativity addresses one of the most fundamental open questions in physics. While there are promising candidates for such a theory of quantum gravity, notably string theory and loop quantum gravity, there is at present no consistent and complete theory. It has long been hoped that a theory of quantum gravity would also eliminate another problematic feature of general relativity: the presence of spacetime singularities. These singularities are boundaries ("sharp edges") of spacetime at which geometry becomes ill-defined, with the consequence that general relativity itself loses its predictive power. Furthermore, there are so-called singularity theorems which predict that such singularities must exist within the universe if the laws of general relativity were to hold without any quantum modifications. The best-known examples are the singularities associated with the model universes that describe black holes and the beginning of the universe.
Other attempts to modify general relativity have been made in the context of cosmology. In the modern cosmological models, most energy in the universe is in forms that have never been detected directly, namely dark energy and dark matter. There have been several controversial proposals to obviate the need for these enigmatic forms of matter and energy, by modifying the laws governing gravity and the dynamics of cosmic expansion, for example modified Newtonian dynamics.
Beyond the challenges of quantum effects and cosmology, research on general relativity is rich with possibilities for further exploration: mathematical relativists explore the nature of singularities and the fundamental properties of Einstein's equations, ever more comprehensive computer simulations of specific spacetimes (such as those describing merging black holes) are run, and the race for the first direct detection of gravitational waves continues apace. More than ninety years after the theory was first published, research is more active than ever. |
It has always been a mystery how the universe began and whether,
if ever, it will end. Astronomers construct hypotheses called
cosmological models that try to find the answer. There are two
main types of models: Big Bang and Steady State. However, many
lines of observational evidence indicate that the Big Bang theory
can best explain the creation of the universe.
The Big Bang model postulates that about 15 to 20 billion
years ago, the universe violently exploded into being, in an
event called the Big Bang. Before the Big Bang, all of the
matter and radiation of our present universe were packed together
in the primeval fireball–an extremely hot dense state from which
the universe rapidly expanded.1 The Big Bang was the start of
time and space.
The matter and radiation of that early stage
rapidly expanded and cooled. Several million years later, it
condensed into galaxies. The universe has continued to expand,
and the galaxies have continued moving away from each other ever
since. Today the universe is still expanding, as astronomers have observed.
The Steady State model says that the universe does not
evolve or change in time. There was no beginning in the past,
nor will there be change in the future. This model assumes the
perfect cosmological principle. This principle says that the
universe is the same everywhere on the large scale, at all
times.2 It maintains the same average density of matter forever.
Observational evidence has been found that shows the Big Bang
model is more reasonable than the Steady State model.
The first is the redshift of distant galaxies. Redshift is a
Doppler effect: if a galaxy is moving away, its observed spectral
lines are shifted toward the red end of the spectrum.
The faster the galaxy moves, the more shift it has. If the
galaxy is moving closer, the spectral line will show a blue
shift. If the galaxy is not moving, there is no shift at all.
However, as astronomers have observed, the more distant a galaxy
is from Earth, the greater the redshift it shows in its spectrum.
This means that the farther away a galaxy is, the faster it moves.
Therefore, the universe is expanding, and the Big Bang model
seems more reasonable than the Steady State model.
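To make this relationship concrete, a measured redshift can be converted into a recession velocity and, through Hubble's law, into a distance. The short calculation below is only an illustration: the observed wavelength is hypothetical, and a Hubble constant of 70 km/s per megaparsec is assumed.

    # Illustrative use of redshift and Hubble's law for a nearby galaxy.
    c = 3.0e5      # speed of light, km/s
    H0 = 70.0      # assumed Hubble constant, km/s per megaparsec

    rest_wavelength = 656.3      # H-alpha line at rest, nanometres
    observed_wavelength = 663.0  # hypothetical observed value, nanometres

    z = (observed_wavelength - rest_wavelength) / rest_wavelength  # redshift
    v = c * z        # recession velocity in km/s (valid for small z)
    d = v / H0       # distance in megaparsecs, from Hubble's law
    print(z, v, d)   # about z = 0.01, v = 3,000 km/s, d = 44 Mpc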
The second observational evidence is the radiation produced
by the Big Bang. The Big Bang model predicts that the universe
should still be filled with a small remnant of radiation left
over from the original violent explosion of the primeval fireball
in the past. The primeval fireball would have sent strong
shortwave radiation in all directions into space. In time, that
radiation would spread out, cool, and fill the expanding universe.
By now it would strike Earth as microwave radiation.
In 1965 physicists Arno Penzias and Robert Wilson detected
microwave radiation coming equally from all directions in the
sky, day and night, all year.3 And so it appears that
astronomers have detected the fireball radiation that was
produced by the Big Bang. This casts serious doubt on the Steady
State model. The Steady State could not explain the existence of
this radiation, so the model cannot best explain the beginning of the universe.
Since the Big Bang model is the better model, the existence
and the future of the universe can also be explained. Around 15
to 20 billion years ago, time began. The points that were to
become the universe exploded in the primeval fireball called the
Big Bang. The exact nature of this explosion may never be known.
However, recent theoretical breakthroughs, based on the
principles of quantum theory, have suggested that space, and the
matter within it, masks an infinitesimal realm of utter chaos,
where events happen randomly, in a state called quantum chaos.4
Before the universe began, this chaos was all there was. At
some time, a portion of this randomness happened to form a
bubble, with a temperature in excess of 10 to the power of 34
degrees Kelvin. Being that hot, naturally it expanded. For an
extremely brief period, billionths of billionths of a second, it
inflated. At the end of the period of inflation, the universe may
have had a diameter of a few centimetres. The
temperature had cooled enough for particles of matter and
antimatter to form, and they instantly destroyed each other,
producing fire and a thin haze of matter, apparently because
slightly more matter than antimatter was formed.5 The fireball,
and the smoke of its burning, was the universe at an age of a
trillionth of a second.
The temperature of the expanding fireball dropped rapidly,
cooling to a few billion degrees in a few minutes. Matter
continued to condense out of energy, first protons and neutrons,
then electrons, and finally neutrinos. After about an hour, the
temperature had dropped below a billion degrees, and protons and
neutrons combined and formed hydrogen, deuterium, and helium. In a
billion years, this cloud of energy, atoms, and neutrinos had
cooled enough for galaxies to form. The expanding cloud cooled
still further until today, its temperature is a couple of degrees
above absolute zero.
In the future, the universe may end up in two possible
situations. From the initial Big Bang, the universe attained a
speed of expansion. If that speed is greater than the universe’s
own escape velocity, then the universe will not stop its
expansion. Such a universe is said to be open. If the velocity
of expansion is slower than the escape velocity, the universe
will eventually reach the limit of its outward thrust, just like
a ball thrown in the air comes to the top of its arc, slows,
stops, and starts to fall. The crash of the long fall may be the
Big Bang to the beginning of another universe, as the fireball
formed at the end of the contraction leaps outward in another
great expansion.6 Such a universe is said to be closed.
If the universe has achieved escape velocity, it will
continue to expand forever. The stars will redden and die, the
universe will be like a limitless empty haze, expanding
infinitely into the darkness. This space will become even
emptier, as the fundamental particles of matter age, and decay
through time. As the years stretch on into infinity, nothing
will remain except a few primitive atom-like pairs of particles,
such as positrons and electrons, orbiting each other at distances of hundreds of
astronomical units.7 These particles will spiral slowly toward
each other until touching, and they will vanish in the last flash
of light. In the end, the Big Bang model is still only a theory.
No one knows for sure exactly how the universe began or how it
will end. However, the Big Bang model is the most logical and
reasonable theory in modern science to explain the universe.
1. Dinah L. Mache, Astronomy, New York: John Wiley & Sons,
Inc., 1987. p. 128.
2. Ibid., p. 130.
3. Joseph Silk, The Big Bang, New York: W.H. Freeman and
Company, 1989. p. 60.
4. Terry Holt, The Universe Next Door, New York: Charles
Scribner’s Sons, 1985. p. 326.
5. Ibid., p. 327.
6. Charles J. Caes, Cosmology, The Search For The Order Of
The Universe, USA: Tab Books Inc., 1986. p. 72.
7. John Gribbin, In Search Of The Big Bang, New York: Bantam
Books, 1986. p. 273.
Boslough, John. Stephen Hawking’s Universe. New York: Cambridge
University Press, 1980.
Caes, Charles J. Cosmology, The Search For The Order Of The
Universe. USA: Tab Books Inc., 1986.
Gribbin, John. In Search Of The Big Bang. New York: Bantam Books, 1986.
Holt, Terry. The Universe Next Door. New York: Charles
Scribner’s Sons, 1985.
Kaufmann, William J., III. Astronomy: The Structure Of The
Universe. New York: Macmillan Publishing Co., Inc., 1977.
Mache, Dinah L. Astronomy. New York: John Wiley & Sons, Inc., 1987.
Silk, Joseph. The Big Bang. New York: W.H. Freeman and Company, 1989. |
NetLogo Models Library:
## WHAT IS IT?
This model describes how diffusion occurs between two adjacent solids.
Diffusion is one of the most important phenomena in fields such as biology, chemistry, geology, engineering and physics. Interestingly, before becoming famous for the theory of relativity, Albert Einstein wrote extensively about diffusion, and was one of the first to connect diffusion to the Brownian motion of atoms.
Diffusion can take place in gases, liquids, or solids. In solids, particularly, diffusion occurs due to thermally-activated random motion of atoms - unless the material is at absolute zero temperature (zero Kelvin), individual atoms keep vibrating and eventually move within the material. One of the possible net effects of diffusion is that atoms move from regions of high concentration of one element to regions with low concentration, until the concentration is equal throughout the sample.
This model demonstrates a solid diffusion couple, such as copper and nickel. In a real laboratory, such an experiment would take place at very high temperatures so that the process occurs in a reasonable amount of time (note that the diffusion coefficient varies exponentially with the inverse of the temperature). There are many mechanisms for diffusion in solids. In this model we demonstrate one of them, which is caused by missing atoms in the metal crystal. The locations of the missing atoms are often called vacancies. Therefore, this type of diffusion mechanism is referred to as "vacancy diffusion". The extent to which diffusion can happen depends on the temperature and the number of vacancies in the crystal.
In addition, there are various other conditions that are needed for solid diffusion to occur. Some examples of these are similar atomic size, similar crystal structure, and similar electronegativity. This model assumes all of these conditions are present.
## HOW IT WORKS
There are two types of atoms, green and blue. At the beginning, all green atoms are on the left and the blue atoms are on the right. All the vacancies start out between the two metals. As atoms move into vacancies, the vacancies disperse. In most real-world scenarios, vacancies are scattered in the material to begin with. In this model, for simplification purposes, we assume that the materials have no vacancies in the beginning, and that all the vacancies start off in between the two materials.
In this model we also assume that the heat is evenly distributed throughout the metals. Therefore, each atom has an equal chance of breaking bonds with its neighbors and moving to a vacancy.
## HOW TO USE IT
To run the model, first press the SETUP button, then press the GO button.
"Atoms by Column" is a distribution diagram of the two atom types. The other graph is a maximum diffusion distance, squared, versus time. If the model runs long enough, this plot will show an approximately linear relationship between the squared distance and time, following the known equation (for one-dimensional diffusion):
> x<sup>2</sup> = 2 * D * t
where x is the maximum diffusion distance, D is the diffusion coefficient, and t is elapsed time.
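Although the model itself is written in NetLogo, the linear growth of the squared diffusion distance with time can be reproduced with a short stand-alone sketch of one-dimensional random walks (plain Python, not part of the model code; the numbers of steps and walkers are arbitrary):

    import random

    # One-dimensional random walk as a minimal stand-in for vacancy diffusion:
    # the mean squared displacement grows linearly with time, <x^2> = 2 * D * t.
    STEPS = 1000     # "ticks" per walker
    WALKERS = 5000   # number of independent atoms

    total_sq = 0.0
    for _ in range(WALKERS):
        x = 0
        for _ in range(STEPS):
            x += random.choice((-1, 1))  # each tick the atom hops left or right
        total_sq += x * x

    msd = total_sq / WALKERS   # mean squared displacement after STEPS ticks
    D = msd / (2 * STEPS)      # rearranging <x^2> = 2 * D * t
    print(msd, D)              # msd is close to 1000, D close to 0.5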
## THINGS TO NOTICE
If you run the model for a few hundred ticks, the distribution graph should look like two interleaving curves. The far edges remain purely one color, while the middle is about 50-50.
The other graph should be generally linear. The "diffusion coefficient" of the system is proportional to the slope, and can be easily calculated using the above equation.
## THINGS TO TRY
Let the model run for a long time. (You can use the speed slider to make the model run faster.) Do you think the metal will ever become completely diffused?
Try increasing the dimensions of the world. Does the behavior change at all?
## EXTENDING THE MODEL
The model uses a very simple initial state in which there is always exactly one column of vacancies and they are all located in the middle. Try adding settings that dictate how many vacancies there are and where they start out.
Give the two metals, or the two sides of the world, different characteristics. For example, a temperature difference could be simulated by making atomic movements on one side happen less often than on the other.
Try changing the crystal structure of the atoms. In close-packed atoms in two dimensions, atoms actually have six neighbors (hexagonal) instead of four (square).
## NETLOGO FEATURES
This model uses a non-wrapping world.
## RELATED MODELS
MaterialSim Grain Growth
GasLab Two Gas
## CREDITS AND REFERENCES
Thanks to James Newell for his work on this model.
For additional information:
Porter, D.A., and Easterling, K.E., Phase Transformations in Metals and Alloys, 2nd ed., Chapman & Hall, 1992
Shewmon, P.G., Diffusion in solids, 2nd ed., TMS, 1989
## HOW TO CITE
If you mention this model or the NetLogo software in a publication, we ask that you include the citations below.
For the model itself:
* Wilensky, U. (2007). NetLogo Solid Diffusion model. http://ccl.northwestern.edu/netlogo/models/SolidDiffusion. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.
Please cite the NetLogo software as:
* Wilensky, U. (1999). NetLogo. http://ccl.northwestern.edu/netlogo/. Center for Connected Learning and Computer-Based Modeling, Northwestern University, Evanston, IL.
## COPYRIGHT AND LICENSE
Copyright 2007 Uri Wilensky.

This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
Commercial licenses are also available. To inquire about commercial licenses, please contact Uri Wilensky at email@example.com. |
6.1 Orthogonality and Least Squares: INNER PRODUCT, LENGTH, AND ORTHOGONALITY. © 2012 Pearson Education, Inc.
Slide 6.1- 2 © 2012 Pearson Education, Inc. INNER PRODUCT If u and v are vectors in R^n, then we regard u and v as n×1 matrices. The transpose u^T is a 1×n matrix, and the matrix product u^T v is a 1×1 matrix, which we write as a single real number (a scalar) without brackets. The number u^T v is called the inner product of u and v, and it is written as u · v. The inner product is also referred to as a dot product.
Slide 6.1- 3 © 2012 Pearson Education, Inc. INNER PRODUCT If u = (u1, …, un) and v = (v1, …, vn), then the inner product of u and v is u · v = u^T v = u1v1 + u2v2 + … + unvn.
Slide 6.1- 4 © 2012 Pearson Education, Inc. INNER PRODUCT Theorem 1: Let u, v, and w be vectors in R^n, and let c be a scalar. Then a. u · v = v · u b. (u + v) · w = u · w + v · w c. (cu) · v = c(u · v) = u · (cv) d. u · u ≥ 0, and u · u = 0 if and only if u = 0. Properties (b) and (c) can be combined several times to produce the following useful rule: (c1u1 + … + cpup) · w = c1(u1 · w) + … + cp(up · w).
Slide 6.1- 5 © 2012 Pearson Education, Inc. THE LENGTH OF A VECTOR If v is in R^n, with entries v1, …, vn, then the square root of v · v is defined because v · v is nonnegative. Definition: The length (or norm) of v is the nonnegative scalar ||v|| defined by ||v|| = sqrt(v · v) = sqrt(v1^2 + v2^2 + … + vn^2), so that ||v||^2 = v · v. Suppose v is in R^2, say, v = (a, b); then ||v|| = sqrt(a^2 + b^2).
Slide 6.1- 6 © 2012 Pearson Education, Inc. THE LENGTH OF A VECTOR If we identify v with a geometric point in the plane, as usual, then ||v|| coincides with the standard notion of the length of the line segment from the origin to v. This follows from the Pythagorean Theorem applied to a triangle such as the one shown in the following figure. For any scalar c, the length of cv is |c| times the length of v. That is, ||cv|| = |c| ||v||.
Slide 6.1- 7 © 2012 Pearson Education, Inc. THE LENGTH OF A VECTOR A vector whose length is 1 is called a unit vector. If we divide a nonzero vector v by its length (that is, multiply by 1/||v||), we obtain a unit vector u, because the length of u is (1/||v||)||v|| = 1. The process of creating u from v is sometimes called normalizing v, and we say that u is in the same direction as v.
Slide 6.1- 8 © 2012 Pearson Education, Inc. THE LENGTH OF A VECTOR Example 1: For a given nonzero vector v, find a unit vector u in the same direction as v. Solution: First, compute the length of v: ||v|| = sqrt(v · v). Then, multiply v by 1/||v|| to obtain u = (1/||v||) v.
Slide 6.1- 9 © 2012 Pearson Education, Inc. DISTANCE IN R^n To check that ||u|| = 1, it suffices to show that ||u||^2 = 1. Definition: For u and v in R^n, the distance between u and v, written as dist(u, v), is the length of the vector u - v. That is, dist(u, v) = ||u - v||.
Slide 6.1- 10 © 2012 Pearson Education, Inc. DISTANCE IN R^n Example 2: Compute the distance between two vectors u and v in R^2. Solution: Calculate u - v, and then dist(u, v) = ||u - v||. The vectors u, v, and u - v are shown in the figure on the next slide. When the vector u - v is added to v, the result is u.
Slide 6.1- 11 © 2012 Pearson Education, Inc. DISTANCE IN R^n Notice that the parallelogram in the above figure shows that the distance from u to v is the same as the distance from u - v to 0.
Slide 6.1- 12 © 2012 Pearson Education, Inc. ORTHOGONAL VECTORS Consider R^2 or R^3 and two lines through the origin determined by vectors u and v. See the figure below. The two lines shown in the figure are geometrically perpendicular if and only if the distance from u to v is the same as the distance from u to -v. This is the same as requiring the squares of the distances to be the same.
Slide 6.1- 13 © 2012 Pearson Education, Inc. ORTHOGONAL VECTORS Now [dist(u, -v)]^2 = ||u - (-v)||^2 = ||u + v||^2 = (u + v) · (u + v) = u · (u + v) + v · (u + v) [Theorem 1(b)] = u · u + u · v + v · u + v · v [Theorem 1(a), (b)] = ||u||^2 + ||v||^2 + 2 u · v [Theorem 1(a)]. The same calculations with v and -v interchanged show that [dist(u, v)]^2 = ||u||^2 + ||v||^2 - 2 u · v.
Slide 6.1- 14 © 2012 Pearson Education, Inc. ORTHOGONAL VECTORS The two squared distances are equal if and only if 2 u · v = -2 u · v, which happens if and only if u · v = 0. This calculation shows that when vectors u and v are identified with geometric points, the corresponding lines through the points and the origin are perpendicular if and only if u · v = 0. Definition: Two vectors u and v in R^n are orthogonal (to each other) if u · v = 0. The zero vector is orthogonal to every vector in R^n because 0^T v = 0 for all v.
Slide 6.1- 15 © 2012 Pearson Education, Inc. THE PYTHAGOREAN THEOREM Theorem 2: Two vectors u and v are orthogonal if and only if ||u + v||^2 = ||u||^2 + ||v||^2. Orthogonal Complements: If a vector z is orthogonal to every vector in a subspace W of R^n, then z is said to be orthogonal to W. The set of all vectors z that are orthogonal to W is called the orthogonal complement of W and is denoted by W⊥ (and read as "W perpendicular" or simply "W perp").
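A quick numerical check of the orthogonality definition and the Pythagorean Theorem (a small NumPy sketch added for illustration; the vectors are arbitrary and do not come from the slides):

    import numpy as np

    u = np.array([3.0, -1.0, 2.0])
    v = np.array([2.0, 4.0, -1.0])

    print(u @ v)   # 0.0, so u and v are orthogonal

    # Theorem 2: ||u + v||^2 = ||u||^2 + ||v||^2 exactly when u . v = 0
    lhs = np.linalg.norm(u + v) ** 2
    rhs = np.linalg.norm(u) ** 2 + np.linalg.norm(v) ** 2
    print(np.isclose(lhs, rhs))   # True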
Slide 6.1- 16 © 2012 Pearson Education, Inc. ORTHOGONAL COMPLEMENTS 1. A vector x is in W⊥ if and only if x is orthogonal to every vector in a set that spans W. 2. W⊥ is a subspace of R^n. Theorem 3: Let A be an m×n matrix. The orthogonal complement of the row space of A is the null space of A, and the orthogonal complement of the column space of A is the null space of A^T: (Row A)⊥ = Nul A and (Col A)⊥ = Nul A^T.
Slide 6.1- 17 © 2012 Pearson Education, Inc. ORTHOGONAL COMPLEMENTS Proof: The row-column rule for computing Ax shows that if x is in Nul A, then x is orthogonal to each row of A (with the rows treated as vectors in R^n). Since the rows of A span the row space, x is orthogonal to Row A. Conversely, if x is orthogonal to Row A, then x is certainly orthogonal to each row of A, and hence Ax = 0, which shows that x is in Nul A. This proves the first statement of the theorem.
Slide 6.1- 18 © 2012 Pearson Education, Inc. ORTHOGONAL COMPLEMENTS Since this statement is true for any matrix, it is true for A^T. That is, the orthogonal complement of the row space of A^T is the null space of A^T. This proves the second statement, because Row A^T = Col A.
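The relationship (Row A)⊥ = Nul A can also be checked numerically on a small example (again an illustrative NumPy sketch, not part of the original slides):

    import numpy as np

    # Verify (Row A)-perp = Nul A on a small example.
    A = np.array([[1.0, 2.0, 3.0],
                  [2.0, 4.0, 6.0]])   # rank 1, so Nul A is two-dimensional

    x = np.array([3.0, 0.0, -1.0])    # A @ x = 0, so x is in Nul A
    print(A @ x)                      # [0. 0.]

    # x is orthogonal to every row of A, hence to all of Row A.
    print(A[0] @ x, A[1] @ x)         # 0.0 0.0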
Slide 6.1- 19 © 2012 Pearson Education, Inc. ANGLES IN R^2 AND R^3 (OPTIONAL) If u and v are nonzero vectors in either R^2 or R^3, then there is a nice connection between their inner product and the angle θ between the two line segments from the origin to the points identified with u and v. The formula is u · v = ||u|| ||v|| cos θ ----(1) To verify this formula for vectors in R^2, consider the triangle shown in the figure on the next slide, with sides of lengths ||u||, ||v||, and ||u - v||.
Slide 6.1- 20 © 2012 Pearson Education, Inc. ANGLES IN R^2 AND R^3 (OPTIONAL) By the law of cosines, ||u - v||^2 = ||u||^2 + ||v||^2 - 2 ||u|| ||v|| cos θ, which can be rearranged to produce the equations on the next slide.
Slide 6.1- 21 © 2012 Pearson Education, Inc. ANGLES IN R^2 AND R^3 (OPTIONAL) The verification for R^3 is similar. When n > 3, formula (1) may be used to define the angle between two vectors in R^n. In statistics, the value of cos θ defined by (1) for suitable vectors u and v is called a correlation coefficient.
10.4 Complex Vector Spaces.
Chapter 4 Euclidean Vector Spaces
Euclidean m-Space & Linear Equations Euclidean m-space.
Linear Equations in Linear Algebra
Eigenvalues and Eigenvectors
Symmetric Matrices and Quadratic Forms
Chapter 5 Orthogonality
Orthogonality and Least Squares
4 4.6 © 2012 Pearson Education, Inc. Vector Spaces RANK.
Matrix Algebra THE INVERSE OF A MATRIX © 2012 Pearson Education, Inc.
6 6.1 © 2012 Pearson Education, Inc. Orthogonality and Least Squares INNER PRODUCT, LENGTH, AND ORTHOGONALITY.
6 6.3 © 2012 Pearson Education, Inc. Orthogonality and Least Squares ORTHOGONAL PROJECTIONS.
VECTORS AND THE GEOMETRY OF SPACE 12. VECTORS AND THE GEOMETRY OF SPACE So far, we have added two vectors and multiplied a vector by a scalar.
Chapter 9-Vectors Calculus, 2ed, by Blank & Krantz, Copyright 2011 by John Wiley & Sons, Inc, All Rights Reserved.
Chapter 5: The Orthogonality and Least Squares
Copyright © Cengage Learning. All rights reserved. 12 Vectors and the Geometry of Space.
CHAPTER FIVE Orthogonality Why orthogonal? Least square problem Accuracy of Numerical computation.
Linear Algebra Chapter 4 Vector Spaces.
Most people have never heard of neuroblastoma, but it's actually the most common type of cancer in infants.
In this rare disease, a solid tumor (a lump or mass caused by uncontrolled or abnormal cell growth) is formed by special nerve cells called neuroblasts. Normally, these immature cells grow into functioning nerve cells. But in neuroblastoma, they become cancer cells instead.
Although neuroblastoma sometimes forms before a child is born, it usually isn't found until later, when the tumor begins to grow and affect the body. When neuroblastoma is found and treated in infancy, the chance of recovery is good.
Neuroblastoma most commonly starts in the tissue of the adrenal glands, the triangular glands on top of the kidneys that make hormones that control heart rate, blood pressure, and other important functions. Like other cancers, neuroblastoma can spread (metastasize) to other parts of the body, such as the lymph nodes, skin, liver, and bones.
In a few cases, the tendency to get this type of cancer can be passed down from a parent to a child (called the familial type). But most cases of neuroblastoma (98%) are not inherited (called the sporadic type). It happens almost exclusively in infants and children, and is slightly more common in boys than in girls.
Children diagnosed with neuroblastoma are usually younger than 5 years old, and most new cases happen in children younger than 2. About 700 new cases of neuroblastoma are diagnosed each year in the United States.
Neuroblastoma happens when neuroblasts grow and divide out of control instead of developing into nerve cells. The exact cause of this abnormal growth is not known, but scientists believe a defect in the genes of a neuroblast allows it to divide uncontrollably.
Signs and Symptoms
The effects of neuroblastoma can be different depending on where the disease first started, how much the cancer has grown, and how much it has spread to other parts of the body.
The first symptoms are often vague and may include irritability, being very tired, loss of appetite, and fever. But because these early signs can develop slowly and be similar to symptoms of other common childhood illnesses, neuroblastoma can be difficult to diagnose.
In young children, neuroblastoma often is discovered when a parent or doctor feels an unusual lump or mass somewhere in the child's body — most often in the abdomen, though tumors also can appear in the neck, chest, and elsewhere.
The most common signs of neuroblastoma happen when the growing tumor presses on nearby tissues or when the cancer spreads to other areas. For example, a child may have:
- a swollen stomach, abdominal pain, and decreased appetite (if the tumor is in the abdomen)
- bone pain or soreness, black eyes, bruises, and pale skin (if the cancer has spread to the bones or bone marrow)
- weakness, numbness, inability to move a body part, or difficulty walking (if the cancer presses on the spinal cord)
- drooping eyelid, unequal pupils, sweating, and red skin, which are signs of nerve damage in the neck known as Horner's syndrome (if the tumor is in the neck)
- difficulty breathing (if the cancer is in the chest)
- fever and irritability
If a doctor suspects neuroblastoma, your child may undergo tests to confirm the diagnosis and rule out other causes of symptoms. These tests may include simple urine tests and blood tests, imaging studies (such as X-rays, a CT scan, an MRI, an ultrasound, and a bone scan), and a biopsy (removal and examination of a tissue sample).
These tests help to find the location and size of the original (primary) tumor and see whether it has spread to other areas of the body, a process called staging. Other tests, such as a bone marrow aspiration and biopsy, also might be done.
The doctor might order an MIBG scan. In this imaging test, MIBG (iodine-meta-iodobenzyl-guanidine, a type of radioactive material) is injected into the blood and attaches to neuroblastoma cells. This lets the doctor see if the neuroblastoma has spread to other parts of the body. MIBG is also used at higher doses to treat neuroblastoma, and can be used for scanning after treatment to see if any cancer cells remain.
In rare cases, neuroblastoma may be detected by ultrasound before birth.
How neuroblastoma is treated depends on factors that determine risk, such as the child's age, the characteristics of the tumor, and whether the cancer has spread.
The three risk groups are: low risk, intermediate risk, and high risk. Children with low-risk or intermediate-risk neuroblastoma have a good chance of being cured. However, more than half of all children with neuroblastoma have the high-risk type, which can be difficult to cure.
Because some cases of neuroblastoma disappear on their own without treatment, doctors also sometimes use "watchful waiting" before trying other treatments.
Unfortunately, in most cases the neuroblastoma has spread by the time it's diagnosed. In these cases, chemotherapy and surgery are usually the primary treatments and may be done along with radiation therapy and stem cell transplants.
Another treatment the doctor might suggest is retinoid therapy. (Retinoids are substances that work in the body much like vitamin A.) Scientists believe that retinoids can help cure neuroblastoma by encouraging cancer cells to turn into mature nerve cells. Retinoids are often used after other treatments to help prevent the cancer from growing back.
Other treatment options include tumor vaccines and immunotherapy using monoclonal antibodies. Monoclonal antibodies are special substances that can be injected into the body to seek out and attach to cancer cells. They're sometimes used to stimulate the immune system to attack the neuroblastoma cancer cells.
With treatment, many children with neuroblastoma have a good chance of surviving. In general, neuroblastoma has a better outcome if the cancer hasn't spread or if the child is younger than 1 year old when diagnosed.
High-risk neuroblastoma is harder to cure and is more likely to become resistant to standard therapies or come back (recur) after initially successful treatment.
"Late effects" are problems that patients can develop after cancer treatments have ended. Late effects of neuroblastoma include growth and developmental delays and loss of function in involved organs. Hearing loss is common. The risk of developing late effects depends on things like the specific treatments used and the child's age during treatment.
Although rare, some kids with neuroblastoma develop opsoclonus-myoclonus syndrome, a condition where the immune system attacks normal nerve tissue. As a result, some might have learning disabilities, delays in muscle and movement development, language problems, and behavioral problems.
Children treated for neuroblastoma also may be at higher risk for other cancers.
Caring for Your Child
Being told your child has neuroblastoma can be overwhelming, and cancer treatment can take a huge toll on your child and family. At times, you might feel helpless.
But you play a vital role in your child's treatment. During this difficult time, it's important to learn as much as you can about neuroblastoma and its treatment. Being knowledgeable will help you make informed decisions and better help your child cope with the tests and treatments. Don't be afraid to ask the doctors questions.
Although you might feel like it at times, you're not alone. It can help to find a support group for parents whose kids are coping with cancer (there are groups specifically for parents of children with neuroblastoma).
Parents often struggle with how much to tell a child who's diagnosed with cancer. While there's no perfect answer, experts agree that it's best to be honest — but to tailor the details to your child's degree of understanding and emotional maturity. Give as much information as your child needs, but not more.
And when explaining treatment, try to break it down into steps. Addressing each part as it comes — visiting various doctors, having a special machine take pictures of the body, needing an operation — can make the big picture less scary. Be sure to explain to your child that the disease is not the result of anything he or she did.
Also remember that it's common for siblings to feel neglected, jealous, and angry when a child is seriously ill. Explain as much as they can understand, and involve family members, teachers, and friends to help keep some sense of normalcy for them.
And finally, as hard as it may be, try to take care of yourself. Parents who get the support they need are better able to support their children.
Reviewed by: Eric S. Sandler, MD
Date reviewed: January 06, 2017 |
Solve problems involving division by a one digit number, including those that result in a remainder (ACMNA101)
Use a dividing tool to make equal shares of biscuits and toys in a pet shop. For example, share 34 biscuits equally between 6 puppies. Predict how many items each puppy will get, or how many packets can be filled. Check your prediction. Decide what to do with any leftovers. Complete a sentence describing the number operations.
Use a dividing tool to make equal shares of stationery such as pens, pencils or crayons. Complete a sentence describing a number operation. For example, pack 24 crayons into packets of 5. Predict how many packets are needed and identify how many items are left over.
Selected links to a range of interactive online resources for the study of number in Foundation to Year 6 Mathematics.
Explore facts about the life of cassowaries: physical characteristics; diet; habitat; life cycles; and locations. Interact with graphs to see how much people can help cassowaries. Work through ecology notes and resources. Answer questions as you go; express your answers as fractions. This learning object is one in a series ...
Reducing carbon dioxide emissions and using energy sustainably are two of the major issues facing the world today. This project explores energy use in homes, compares individual energy use with the class average, and calculates and graphs CO2 emissions.
This tutorial is suitable for use with a screen reader. It explains how to split up numbers in your head when finding the difference between two numbers such as 26 and 73. Work through sample questions and instructions explaining how to use linear partitioning techniques. Find the difference between pairs of numbers. Split ...
Help a town planner to design two site plans for a school. Assign regions on a 10x10 grid for different uses such as a playground, canteen, car park or lawn. Calculate the percentage of the total site used for each region. Use a number line to display fractions and equivalent fractions.
Solve divisions such as 147/7 or 157/6 (some have remainders). Use a partitioning tool to help solve randomly generated divisions. Learn strategies to do complex arithmetic in your head. Split a division into parts that are easy to work with, use times tables, then solve the original calculation.
This tutorial is suitable for use with a screen reader. It explains how the use of simple words can describe the likelihood of everyday events. How likely is an event: certain, likely, equal chance, unlikely or certainly not? Answer some questions using these words and then build your own examples. Learn how to describe ...
These seven learning activities, which focus on 'open-ended tasks' using a variety of tools (software) and devices (hardware), illustrate the ways in which content, pedagogy and technology can be successfully and effectively integrated in order to promote learning.
In the activities, teachers use investigations in order ...
This series of three lessons explores the relationship between area and perimeter using the context of bumper cars at an amusement park. Students design a rectangular floor plan with the largest possible area with a given perimeter. They then explore the perimeter of a bumper car ride that has a set floor area and investigate ...
This tutorial is suitable for use with a screen reader. It explains strategies for solving complex multiplications in your head such as 22x38. Work through sample questions and instructions explaining how to use partitioning techniques. Solve multiplications by breaking them up into parts that are easy to work with, use ...
Learn a cool trick using the concept of the mean (or average). Pick any 3 x 3 block of dates on a monthly calendar. The number in the middle square is the mean of the nine numbers that form the 3 x 3 square. If you add all the numbers and divide the total by nine (the number of squares), the answer is the number in the ...
Did you know that in Australia we use a metric system for measurement? See if you know the units of measurement for length, mass and volume. Find out what system the United States uses. You guessed it - they don't use the metric system! See how a mix up of these units can cause all kinds of mess ups.
Amaze your friends with your super mind-reading skills. Here’s a brain game you can play by asking a few questions and substituting letters for numbers! Learn to follow a specific sequence of arithmetical steps to always arrive at the same answer.
What is a quarter? You get quarters when you divide a whole into four equal parts. Each one of these four parts is a quarter. Watch this great explainer produced by Monique in collaboration with ABC Splash and see how she explains quarters.
Did you know that the digits on opposite faces of dice will always add up to seven? Use dice as fun tools to reinforce fact families of seven, multiples of seven and subtraction skills.
Follow these simple calculations to illustrate the special properties of the number 9. Pick your favourite number between 1 and 9 and multiply that number by 3. Add 3 to your answer. Multiply the result by 3. Treat your two-digit answer as two separate numbers and add them together. No matter what number you pick to start ...
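The "always 9" outcome described above can be verified for every possible starting number with a few lines of code (a small illustrative sketch, independent of the linked resource):

    # Check the "always 9" trick for every possible starting number.
    for n in range(1, 10):
        value = (n * 3 + 3) * 3                      # multiply by 3, add 3, multiply by 3
        digit_sum = sum(int(d) for d in str(value))  # add the two digits
        print(n, value, digit_sum)                   # digit_sum is 9 every time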
Can maths really help to save lives? In this clip we see some real life applications of mathematics. Some are about helping to save lives others are about how maths can be useful. What do Florence Nightingale and WHO, the World Health Organisation have in common?
This teacher resource describes how 74 public schools in metropolitan, regional and rural Western Australia used three major components of the school improvement cycle to achieve significant improvement in the literacy and numeracy learning outcomes of their students. The resource is organised in nine sections: Summary, ... |
The new virus, known as SARS-CoV-2 and the cause of the disease Covid-19, is one of a family of viruses termed coronaviruses. These viruses were first identified in the mid-1960s and are named after the characteristic shape of their outer surface, resembling a halo or crown.
The first Severe Acute Respiratory Syndrome (SARS) coronavirus (SARS-CoV-1) emerged in 2003, while another coronavirus causing significant disease, the Middle East Respiratory Syndrome (MERS) coronavirus, was first identified in 2012.
As of 11 May, official figures show there have been 219,183 confirmed cases of Covid-19 and 31,855 related deaths across the UK.1 The virus is primarily spread through droplets from the nose or mouth when a person with Covid-19 coughs, sneezes or speaks.2
The World Health Organisation (WHO) and governments are promoting measures, such as social distancing and regular handwashing, to help reduce transmission in the community.
While the majority of infections are relatively mild, the WHO estimates almost 40% of people affected require hospitalisation and around one in 20 people require ICU treatment.3
It is widely understood that elderly people are most susceptible to severe illness from Covid-19, and that people with underlying conditions, such as diabetes and respiratory diseases, are also particularly vulnerable.
There are also stark disparities in the severity of disease between men and women, and among different ethnic groups.
It is important that healthcare professionals and other keyworkers understand and are aware of these disparities in risk, when assessing and advising patients.
These factors should be considered to help sensitive discussions with patients and colleagues about prevention measures, as well as in assessing risk of severe disease and complications. Knowledge and policies are changing daily, so as healthcare professionals we need to keep well informed with current advice.
How is Covid-19 affecting men more than women?
Various reports have shown that men are at greater risk from Covid-19 than women. A recent study from the Office for National Statistics (ONS) showed that among people of working age, death rates due to Covid-19 were nearly twice as high in men, with 9.9 deaths per 100,000 in men compared with 5.5 per 100,000 in women.4
A number of theories are emerging as to why men may be more at risk from the virus. The disparity may be at least partly down to basic biology. For example, evidence indicates men have more angiotensin converting enzyme (ACE)-2 receptors – which allow Covid-19 to enter cells – in the lungs and other tissues than women.5,6
Genetic differences may also affect sex-specific outcomes to Covid-19 infection. The sex hormone testosterone, more highly expressed in men than women, is known to suppress the immune system.7
In addition, certain X chromosome-linked immune factors are more highly expressed in women than men. This includes the Toll–like receptors (TLRs) and in particular TLR7 that may be important in the anti-viral immune response.8,9
Other factors thought to contribute to the disparity between men and women include lifestyle behaviours, such as frequency of hand washing, smoking and diet.10
Body shape may also be important – men typically being more ‘apple-shaped’ with a higher waist-to-hip ratio than women, who are more typically ‘pear-shaped’. This means more fat is distributed on the torso among men, which can increase pressure on respiratory muscles.
How are BAME groups at increased risk from Covid-19?
Mounting evidence indicates that Black, Asian and Minority Ethnic (BAME) people are disproportionately affected by Covid-19.
A recent ONS analysis showed that, taking into account age differences, black men and women were four times as likely to die from the virus as white people. Those of Bangladeshi and Pakistani origin were at over three times the risk, and those of Indian origin were at twice the risk.11
The UK Government has now set up a scientific review looking at how ethnicity, along with other factors, may be linked to an increased risk of morbidity and mortality compared with white people, and there have been calls for a wider inquiry.12,13
A number of factors have been put forward as potential contributors to an increased risk in BAME populations. One is that BAME people are more likely to live in crowded conditions and experience more poverty, which contributes to poorer health generally and can make it harder for people to adhere to physical distancing and isolation measures.
BAME groups also have a higher incidence of certain illnesses including heart disease, diabetes and high blood pressure, underlying health conditions that may put them at increased risk of severe illness due to Covid-19.14
However, a recent Oxford University study found that BAME people were still at twice the risk of dying from Covid-19 compared with white people, even after adjusting for underlying conditions and deprivation.15
The study authors suggest other possible explanations for the increased risk among BAME groups may ‘relate to higher infection risk, including over-representation in “front-line” professions with higher exposure to infection, or higher household density’.
Other experts have also noted that the over-representation of BAME groups in higher risk occupations such as health and care work, the transport sector and shop work may be a key contributory factor.16
Role of nurses in educating people and taking history
Nurses can play an important role in educating patients and the public about prevention generally but also the increased risks faced by certain groups, and offering targeted advice.
Those in high risk groups should be provided with the relevant information and health promotion material should be adapted according to target groups. Be mindful that different minority groups may use specific media rather than mainstream channels. There may also be language and communication problems.
Patients should be advised about the importance of minimising risk in crowded conditions, for example not sharing eating utensils and spacing living arrangements as far as is practically possible.
During general consultations healthcare professionals can highlight health promotion messages – for example, the importance of hand washing and maintaining a physical distance. It is also important to offer advice about healthy eating and taking exercise to prevent weight gain during this period of physical distancing, as well as to promote mental health and wellbeing. A structured routine may help patients to undertake daily exercise regimes and help to normalise daily living under distancing and isolation measures.
There are a range of resources available online – patients can be directed to resources listed at the end of this article. The Royal College of Nursing (RCN) has also produced guidance for healthcare providers from the BAME community.
How to keep updated
There are still many uncertainties about the Covid-19 pandemic. For example, experts are not yet sure whether robust immunity is gained after exposure to SARS-CoV-2, or how long any possible immunity lasts.
Make sure you are aware of the latest official guidance from the Government, NHS England and Public Health England. In keeping yourself informed more widely, be aware that some media channels can be unhelpful, providing potentially inaccurate or politically biased information.
These are not normal times, when we can use NICE guidelines as the foundation for our decision making. NICE have, however, published rapid guidelines for different specific conditions.
New theories about Covid-19 infection control strategies and treatments are being debated daily so while it is important to be aware of developments, it is also important to be mindful that most new research findings are not yet published in peer-reviewed journals.
- Keep well informed, so that you are equipped to give timely advice and support
- Maintain a sense of optimism and promote the importance of self-care
- Consider the risk factors of each individual patient and tailor your care and support appropriately
- Practice should be in line with the best available evidence
- Work within your competence
- Practice what you preach – others will take note.
3. WHO press statement April 2020
6. Sama I, Ravera A, Santema B et al. Circulating plasma concentrations of angiotensin-converting enzyme 2 in men and women with heart failure and effects of renin–angiotensin–aldosterone inhibitors. Eur Heart J. Early online publication: 10 May 2020
10. Rieker P and Bird C. Rethinking Gender Differences in Health: Why We Need to Integrate Social and Biological Perspectives. The Journals of Gerontology: Series B, Volume 60, Issue Special_Issue_2, 1 October 2005, Pages S40–S47
15. The OpenSAFELY Collaborative, Williamson E, Walker A et al. OpenSAFELY: factors associated with COVID-19-related hospital death in the linked electronic health records of 17 million adult NHS patients. medRxiv preprint posted 7 May 2020
British Nutrition Foundation (BNF) March 2020. BNF busts the myths on nutrition and COVID-19. |
How can all three of these liquids be rooted in the same science? The key is in the molecule. A molecule’s identity has to do with its character – a character that defines its many properties.
Take physical properties, for instance. They are largely governed by intermolecular forces, the forces of attraction that exist between one molecule and another. Intermolecular forces matter because their strength determines physical properties, like boiling point, that have a powerful impact on how a liquid behaves.
When does a liquid boil? When its molecules have enough energy to break free of the attractions between those molecules.
With this knowledge in mind, 3M scientists went to the lab to create phase-shifting fluids that have the ability to do vastly different things, like clean, cool and protect, with slight changes in their boiling points.
They’re called 3M™ Novec™ fluids.
Novec compounds combine certain properties, like non-flammability and low toxicity, all into a family of molecules.
“All of the pure Novec fluids have different boiling points due to their different chemical structures and molecular weights,” says John Owens, who is a lead research specialist working with 3M’s Novec fluids.
“This range of boiling points was created in order to optimize the performance of the fluids in different applications. Some applications require higher volatility with a material that evaporates more quickly. For these applications, we would select a fluid with a lower boiling point,” says John. “Other applications require transferring heat at relatively high temperatures. For those applications, we developed the fluids with higher boiling points so that this could be accomplished at reasonable working pressures.”
Because of this, many industries like aerospace, electronics and health care rely on Novec fluids to clean and Novec coatings to help protect devices. Libraries and museums use it to help preserve national treasures. The Smithsonian Institution National Museum of Natural History even uses it to help preserve the world’s largest squid specimen. The Library of Congress, the nation’s oldest federal cultural institution, is among the list of places using a Novec fluid to help protect documents from fire risks.
So, what does it take to create molecules with this many characteristics and uses?
3M scientists and engineers from across the globe collaborated to create what is now the broadest category of Novec fluids: segregated hydrofluoroethers (HFEs) – basically, a molecule with a fluorocarbon on one side and a hydrocarbon on the other side, connected by an oxygen atom (the ether).
It’s this chemical makeup that allows Novec fluids to comply with current ozone-protecting and global-warming regulations – revealing the very impetus for why Novec solutions were created.
The scientists put these atoms together in a way that would help solve some environmental challenges that were gaining attention in the early ‘90s.
“People were looking for replacements for chemicals that were ozone-depleting substances,” says John.
That’s because the ozone-depleting substances contained chlorofluorocarbons (CFCs), which were the cleaning medium of choice at the time – but, environmental regulations caused these solvents to begin being phased out in the mid-1990s because of their high ozone-depletion potential (ODP).
The solution? Scientists looked to create a solvent that could perform similarly to CFCs, but without creating environmental issues in its use.
“While we were trying to replace an ozone-depleting chemical, we wanted to make sure it didn’t contribute to some other environmental concern or create a safety risk,” says John.
So, scientists set out to invent materials with zero ozone-depletion potential, while also looking ahead to design materials with low global-warming potential and low greenhouse gas emissions.
Since the ‘90s, a new need has emerged in data centers, where the amount of data being handled keeps rising.
“We see that trend continuing, and it is creating a burden on the grid in terms of the electricity and space required by the servers,” says Jim Ehle, who is a business manager for Novec fluids at 3M.
Data centers consume a lot of energy, and because the servers generate a great deal of heat as they run, cooling is critical to both performance and energy efficiency.
The current approach is to air-cool the servers, but chilling the air and moving it across the hardware can itself require a lot of energy, and in places with high air pollution the circulated air can corrode the servers. Companies like 3M are making headway on alternatives – such as developing commercial solutions for liquid immersion cooling.
“You can dramatically reduce this energy use through liquid cooling,” says Jim.
With immersion cooling, the liquid does the cooling passively instead of using additional energy to blow air across the boards. This can result in a much smaller equipment footprint and a significantly smaller environmental footprint.
Immersion cooling allows tighter packing of components, enabling up to 100 kilowatts of computing power per square meter, compared to just 10 kilowatts in a typical air-cooled system. This means the data center could be housed in one-tenth the floor space, as the quick arithmetic below illustrates.
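The densities in this back-of-envelope check come straight from the figures above; the 1 MW total computing load is an assumed example, not a 3M figure.

it_load_kw = 1000.0            # assumed total computing load: 1 MW
air_cooled_kw_per_m2 = 10.0    # typical air-cooled density, per the figure above
immersion_kw_per_m2 = 100.0    # immersion-cooled density, per the figure above

air_cooled_area_m2 = it_load_kw / air_cooled_kw_per_m2  # 100 m2
immersion_area_m2 = it_load_kw / immersion_kw_per_m2    # 10 m2
print(air_cooled_area_m2 / immersion_area_m2)           # 10.0 -> one-tenth the floor space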
There’s another Novec fluid used in data centers, but for a different purpose: fire suppression. The benefit of this fluid is that it doesn’t damage electronics the way a water-based system can.
“In data centers, it’s critical that service isn’t interrupted. If you only have a water sprinkler system, the sprinklers will go off during a fire and damage your equipment,” says Jim. “That disrupts the data center’s business continuity.”
But, the outcome is different with 3M™ Novec™ 1230 Fire Protection Fluid.
“With Novec 1230 fluid, the fire will be extinguished, and the data center can keep running. It is designed to not damage the electronic equipment so you can sustain your operations,” says Jim.
Traditionally, manufacturers use water to clean precision parts – but they have to load the water with things like detergents in order for it to be able to effectively clean through tight spaces. Novec fluid is designed to do the same job, but without any help from detergents.
Imagine if you wanted to get a layer of wax removed from a piece of glass, for instance. If you place the glass into a vapor degreaser that contains Novec fluid, the wax can easily dissolve, even in the vapor phase.
When heated to its boiling point, the Novec fluid turns into a vapor that is very heavy. It has a density many times that of air, so it simply settles on top of the liquid in the tank.
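A rough ideal-gas estimate shows why the vapor blankets the liquid rather than drifting away: at the same temperature and pressure, gas density scales with molar mass. The 250 g/mol value below is an assumed, order-of-magnitude molar mass for a heavy fluorinated molecule, not a specific Novec property.

molar_mass_air_g_mol = 29.0     # approximate average molar mass of air
molar_mass_vapor_g_mol = 250.0  # assumed heavy fluorinated molecule, illustrative only

# Ideal-gas approximation: density ratio equals molar-mass ratio at the same T and P.
print(molar_mass_vapor_g_mol / molar_mass_air_g_mol)  # ~8.6 times denser than air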
When immersing parts in the Novec fluid inside the vapor degreaser, you’ll see tiny implosions appearing within the fluid through a process called ultrasonication, where sound energy is used to agitate particles in a sample.
“A sonic transducer cycles 40,000 times a second and creates little bubbles inside the fluid. As those bubbles pop, they provide mechanical agitation to the part, removing the wax,” says Karl Manske, a technical service specialist who works in the lab with Novec fluids at 3M.
But that doesn’t happen without help from the Novec fluid, which cycles between its liquid and vapor phases. The hot vapor immediately condenses onto the cooler glass, warming it so the wax melts and dissolves off the surface.
“For parts that have a lot of intricate architecture with blind holes or tight spaces where you have to clean in between spaces, the Novec fluid will wet in between all these areas,” says Karl.
If, instead, you want to keep moisture out of certain areas, Novec coatings can be a surprising solution.
For instance, hidden inside some smartphones, you may find a Novec coating on one of the phone’s most important components: the circuit board. Novec electronic grade coatings provide a fluorinated polymer barrier against water and corrosion on circuit boards.
This is especially helpful since nobody wants a circuit board short caused by water or moisture exposure.
The effectiveness of the coating comes down to surface tension. The Novec fluids the coatings are delivered in have a low surface tension. “We dissolve a fluorinated polymer coating into the Novec solvent. This low-viscosity, low-surface-tension coating solution gets in and around all the components on a circuit board, depositing a layer of Novec coating that provides good coverage,” says 3M chemist Greg Marszalek.
We compared an uncoated glass microscope slide dipped in water to a glass microscope slide dipped in a Novec electronic grade coating.
See how a drop of water reacts to both slides.
On the uncoated glass, you’ll notice how the water goes right underneath the glass. But, on the coated glass, you’ll see how the water beads up and doesn’t go underneath the glass.
“The coating adds a hydrophobic barrier, so if you get water or moisture on your board, the water will just run off,” says Greg.
Watch an experiment showing the hydrophobic nature of a Novec electronic grade coating.
Scientists have their eyes set on making the grid even greener.
“Where we once replaced materials that were ozone-depleting or flammable or had concerns over toxicity, we’re now replacing materials that are potent greenhouse gases,” says John.
That means creating solutions that are more effective than their alternatives, like sulfur hexafluoride (SF6) – a gas used at relatively high pressure to protect high-voltage power equipment on the electric grid. It’s found in transformers, switchgear and circuit breakers. “All these pieces of equipment are filled with SF6,” says John. “The industry knows they need to replace that material, because it’s the most potent greenhouse gas ever identified.”
One solution? “We’re able to use low-pressure fluorinated gases mixed with more common gases, like dry air or carbon dioxide, to successfully replace the sulfur hexafluoride,” explains John.
There’s also a lot of excitement around the electric car revolution. One big industry goal is to improve electric vehicle (EV) performance through better thermal management of the battery. “Novec fluids are being evaluated for managing the temperature of EV batteries, and by doing so, may extend battery cell life and allow faster charge and discharge times,” says Jim.
And one day, Novec fluids may allow the worlds of data centers and driverless vehicles to intersect: driverless vehicles are expected to be one of the new applications for high-density data centers.
“There will be massive amounts of data going between the data center and autonomous vehicles,” says Jim. “We think those data centers will need to be liquid-immersion cooled. Air cooling won’t be sufficient because the amount of data being transmitted will cause an electrical and space burden on the grid.”
John has seen Novec fluids through their entire journey, and is excited for the future.
“Originally, we didn’t immediately get to these molecules. There was a lot of testing. There were a number of materials that weren’t successful,” he says. “You often learn just as much from the things that fail as you do from the things that do have success. You learn what to do next.” |